isaaclab.sensors#
Sub-package containing various sensor class implementations.
This subpackage contains the sensor classes that are compatible with Isaac Sim. We include both USD-based and custom sensors:
- USD-prim sensors: Available in Omniverse and require creating a USD prim for them. For instance, RTX ray tracing camera and lidar sensors.
- USD-schema sensors: Available in Omniverse and require creating a USD schema on an existing prim. For instance, contact sensors and frame transformers.
- Custom sensors: Implemented in Python and do not require creating any USD prim or schema. For instance, warp-based ray-casters.
Due to the above categorization, the prim paths passed to the sensor's configuration class are interpreted differently depending on the sensor type.
Sensor Base#
class isaaclab.sensors.SensorBase[source]#
The base class for implementing a sensor.
The implementation is based on lazy evaluation. The sensor data is only updated when the user tries accessing the data through the data property or sets force_compute=True in the update() method. This is done to avoid unnecessary computation when the sensor data is not used.
The sensor is updated at the specified update period. If the update period is zero, then the sensor is updated at every simulation step.
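As a minimal sketch of this lazy-evaluation contract (the sensor variable and step size below are illustrative; any initialized SensorBase subclass behaves this way):

# `sensor` is any initialized SensorBase subclass; the step size is illustrative.
sensor.update(dt=0.01)   # advances internal timers and marks buffers as outdated
reading = sensor.data    # accessing the property triggers the actual computation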
Methods:
Attributes:
__init__(cfg: SensorBaseCfg)[source]#
Initialize the sensor class.
Parameters:
cfg – The configuration parameters for the sensor.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
Memory device for computation.
abstract property data_: Any_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
set_debug_vis(debug_vis: bool) → bool[source]#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
class isaaclab.sensors.SensorBaseCfg[source]#
Configuration parameters for a sensor.
Attributes:
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
USD Camera#
class isaaclab.sensors.Camera[source]#
Bases: SensorBase
The camera sensor for acquiring visual data.
This class wraps over the UsdGeom Camera for providing a consistent API for acquiring visual data. It ensures that the camera follows the ROS convention for the coordinate system.
Summarizing from the replicator extension, the following sensor types are supported:
"rgb": A 3-channel rendered color image."rgba": A 4-channel rendered color image with alpha channel."distance_to_camera": An image containing the distance to camera optical center."distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis."depth": The same as"distance_to_image_plane"."normals": An image containing the local surface normal vectors at each pixel."motion_vectors": An image containing the motion vector data at each pixel."semantic_segmentation": The semantic segmentation data."instance_segmentation_fast": The instance segmentation data."instance_id_segmentation_fast": The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
"instance_segmentation": The instance segmentation data. Please use the fast counterparts instead."instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead."bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions)."bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions)."bounding_box_3d": The 3D view space bounding box data."bounding_box_3d_fast": The 3D view space bounding box data.
Attributes:
Methods:
cfg_: CameraCfg_#
The configuration parameters.
UNSUPPORTED_TYPES_: set[str]_ = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
__init__(cfg: CameraCfg)[source]#
Initializes the camera sensor.
Parameters:
cfg – The configuration parameters.
Raises:
- RuntimeError – If no camera prim is found at the given path.
- ValueError – If the provided data types are not supported by the camera.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
property data_: CameraData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property frame_: torch.tensor_#
Frame number when the measurement took place.
property render_product_paths_: list[str]_#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
property image_shape_: tuple[int, int]_#
A tuple containing (height, width) of the camera sensor.
set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)[source]#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix is used to set the following parameters to the USD camera:
- focal_length: The focal length of the camera.
- horizontal_aperture: The horizontal aperture of the camera.
- vertical_aperture: The vertical aperture of the camera.
- horizontal_aperture_offset: The horizontal offset of the camera.
- vertical_aperture_offset: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera is a spherical lens, i.e. has square pixels, and the optical center is centered at the camera eye. If this assumption does not hold for the input intrinsic matrix, then the camera will not be set up correctly.
Parameters:
- matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
- focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is computed as 1 / width.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
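For instance, one way to build and apply a single intrinsic matrix to all camera instances (the fx, fy, cx, cy values are hypothetical, and camera is assumed to be an initialized Camera object) is:

import torch

fx, fy, cx, cy = 386.0, 386.0, 320.0, 240.0  # hypothetical pixel-space intrinsics
K = torch.tensor([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]], device=camera.device)
# broadcast the same matrix to every sensor instance
camera.set_intrinsic_matrices(K.repeat(camera.num_instances, 1, 1))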
set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention() for more details on the conventions.
Parameters:
- positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
- orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- convention – The convention in which the poses are fed. Defaults to "ros".
Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
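A minimal usage sketch (assuming an initialized camera and illustrative pose values; the quaternion is in (w, x, y, z) order and is applied in the chosen convention):

import torch

num = camera.num_instances
positions = torch.tensor([[2.0, 0.0, 1.5]], device=camera.device).repeat(num, 1)
orientations = torch.tensor([[1.0, 0.0, 0.0, 0.0]], device=camera.device).repeat(num, 1)
camera.set_world_poses(positions, orientations, convention="world")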
set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
Parameters:
- eyes – The positions of the camera’s eye. Shape is (N, 3).
- targets – The target locations to look at. Shape is (N, 3).
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
Raises:
- RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- NotImplementedError – If the stage up-axis is not "Y" or "Z".
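For example, to aim every camera at the world origin from an illustrative viewpoint (assuming an initialized camera instance):

import torch

eyes = torch.tensor([[2.5, 2.5, 2.5]], device=camera.device).repeat(camera.num_instances, 1)
targets = torch.zeros_like(eyes)
camera.set_world_poses_from_view(eyes, targets)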
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.CameraData[source]#
Data container for the camera sensor.
Attributes:
pos_w_: torch.Tensor_ = None#
Position of the sensor origin in world frame, following ROS convention.
Shape is (N, 3) where N is the number of sensors.
quat_w_world_: torch.Tensor_ = None#
Quaternion orientation (w, x, y, z) of the sensor origin in world frame, following the world coordinate frame
Note
World frame convention follows the camera aligned with forward axis +X and up axis +Z.
Shape is (N, 4) where N is the number of sensors.
image_shape_: tuple[int, int]_ = None#
A tuple containing (height, width) of the camera sensor.
intrinsic_matrices_: torch.Tensor_ = None#
The intrinsic matrices for the camera.
Shape is (N, 3, 3) where N is the number of sensors.
output_: dict[str, torch.Tensor]_ = None#
The retrieved sensor data with sensor types as key.
The format of the data is available in the Replicator Documentation. For semantic-based data, this corresponds to the "data" key in the output of the sensor.
info_: list[dict[str, Any]]_ = None#
The retrieved sensor info with sensor types as key.
This contains extra information provided by the sensor such as semantic segmentation label mapping, prim paths. For semantic-based data, this corresponds to the "info" key in the output of the sensor. For other sensor types, the info is empty.
property quat_w_ros_: torch.Tensor_#
Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following ROS convention.
Note
ROS convention follows the camera aligned with forward axis +Z and up axis -Y.
Shape is (N, 4) where N is the number of sensors.
property quat_w_opengl_: torch.Tensor_#
Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following Opengl / USD Camera convention.
Note
OpenGL convention follows the camera aligned with forward axis -Z and up axis +Y.
Shape is (N, 4) where N is the number of sensors.
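As an access sketch, the container is typically read through the owning sensor's data property; the keys shown assume the corresponding entries were requested in CameraCfg.data_types:

rgb = camera.data.output["rgb"]                         # color images for all sensor instances
depth = camera.data.output["distance_to_image_plane"]   # per-pixel distances along the camera z-axis
intrinsics = camera.data.intrinsic_matrices             # shape (N, 3, 3)
pos_w, quat_ros = camera.data.pos_w, camera.data.quat_w_ros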
class isaaclab.sensors.CameraCfg[source]#
Bases: SensorBaseCfg
Configuration for a camera sensor.
Attributes:
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path /World/envs/env_0/Robot/Camera is /World/envs/env_0/Robot.
spawn_: PinholeCameraCfg | FisheyeCameraCfg | None_#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
depth_clipping_behavior_: Literal['max', 'zero', 'none']_#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- "max": Values are clipped to the maximum value.
- "zero": Values are clipped to zero.
- "none": No clipping is applied. Values will be returned as inf.
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the Camera class for a list of available data types.
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
Width of the image in pixels.
Height of the image in pixels.
update_latest_camera_pose_: bool_#
Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.
If True, the latest camera pose is updated in the camera’s data which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.
semantic_filter_: str | list[str]_#
A string or a list specifying a semantic filter predicate. Defaults to "*:*".
If a string, it should be a disjunctive normal form of (semantic type, labels). For examples:
"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE": All prims with semantic type “typeA” and label “labelA” but not “labelB” or with label “labelC”. Also, all prims with semantic type “typeB” and label “labelA”, or with semantic type “typeC” and label “labelE”."typeA : * ; * : labelA": All prims with semantic type “typeA” or with label “labelA”
If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.
colorize_semantic_segmentation_: bool_#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
colorize_instance_id_segmentation_: bool_#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance id segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
colorize_instance_segmentation_: bool_#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
semantic_segmentation_mapping_: dict_#
Dictionary mapping semantics to specific colors. For example:
{
    "class:cube_1": (255, 36, 66, 255),
    "class:cube_2": (255, 184, 48, 255),
    "class:cube_3": (55, 255, 139, 255),
    "class:table": (255, 237, 218, 255),
    "class:ground": (100, 100, 100, 255),
    "class:robot": (61, 178, 255, 255),
}
Tile-Rendered USD Camera#
class isaaclab.sensors.TiledCamera[source]#
Bases: Camera
The tiled rendering based camera sensor for acquiring the same data as the Camera class.
This class inherits from the Camera class but uses the tiled-rendering API to acquire the visual data. Tiled-rendering concatenates the rendered images from multiple cameras into a single image. This allows for rendering multiple cameras in parallel and is useful for rendering large scenes with multiple cameras efficiently.
The following sensor types are supported:
"rgb": A 3-channel rendered color image."rgba": A 4-channel rendered color image with alpha channel."distance_to_camera": An image containing the distance to camera optical center."distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis."depth": Alias for"distance_to_image_plane"."normals": An image containing the local surface normal vectors at each pixel."motion_vectors": An image containing the motion vector data at each pixel."semantic_segmentation": The semantic segmentation data."instance_segmentation_fast": The instance segmentation data."instance_id_segmentation_fast": The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
"instance_segmentation": The instance segmentation data. Please use the fast counterparts instead."instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead."bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions)."bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions)."bounding_box_3d": The 3D view space bounding box data."bounding_box_3d_fast": The 3D view space bounding box data.
New in version v1.0.0: This feature is available starting from Isaac Sim 4.2. Before this version, the tiled rendering APIs were not available.
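A hedged configuration sketch for tiled rendering (the prim path, offset, and spawn values are illustrative):

import isaaclab.sim as sim_utils
from isaaclab.sensors import TiledCameraCfg

tiled_camera = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Camera",
    offset=TiledCameraCfg.OffsetCfg(pos=(-7.0, 0.0, 3.0), rot=(0.9945, 0.0, 0.1045, 0.0), convention="world"),
    data_types=["rgb"],
    spawn=sim_utils.PinholeCameraCfg(
        focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 20.0)
    ),
    width=80,
    height=80,
)

With this setup, the images for all environments are batched along the first dimension of the corresponding entry in data.output.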
Attributes:
Methods:
cfg_: TiledCameraCfg_#
The configuration parameters.
__init__(cfg: TiledCameraCfg)[source]#
Initializes the tiled camera sensor.
Parameters:
cfg – The configuration parameters.
Raises:
- RuntimeError – If no camera prim is found at the given path.
- RuntimeError – If the Isaac Sim version is less than 4.2.
- ValueError – If the provided data types are not supported by the camera.
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
UNSUPPORTED_TYPES_: set[str]_ = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
property data_: CameraData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
Memory device for computation.
property frame_: torch.tensor_#
Frame number when the measurement took place.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property image_shape_: tuple[int, int]_#
A tuple containing (height, width) of the camera sensor.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
property render_product_paths_: list[str]_#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix is used to set the following parameters to the USD camera:
- focal_length: The focal length of the camera.
- horizontal_aperture: The horizontal aperture of the camera.
- vertical_aperture: The vertical aperture of the camera.
- horizontal_aperture_offset: The horizontal offset of the camera.
- vertical_aperture_offset: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera is a spherical lens, i.e. has square pixels, and the optical center is centered at the camera eye. If this assumption does not hold for the input intrinsic matrix, then the camera will not be set up correctly.
Parameters:
- matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
- focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is computed as 1 / width.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention() for more details on the conventions.
Parameters:
- positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
- orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- convention – The convention in which the poses are fed. Defaults to "ros".
Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#
Set the poses of the camera from the eye position and look-at target position.
Parameters:
- eyes – The positions of the camera’s eye. Shape is (N, 3).
- targets – The target locations to look at. Shape is (N, 3).
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
Raises:
- RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- NotImplementedError – If the stage up-axis is not "Y" or "Z".
class isaaclab.sensors.TiledCameraCfg[source]#
Bases: CameraCfg
Configuration for a tiled rendering-based camera sensor.
Classes:
Attributes:
class OffsetCfg#
Bases: object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
pos_: tuple[float, float, float]_#
Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).
rot_: tuple[float, float, float, float]_#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
convention_: Literal['opengl', 'ros', 'world']_#
The convention in which the frame offset is applied. Defaults to “ros”.
"opengl"- forward axis:-Z- up axis:+Y- Offset is applied in the OpenGL (Usd.Camera) convention."ros"- forward axis:+Z- up axis:-Y- Offset is applied in the ROS convention."world"- forward axis:+X- up axis:+Z- Offset is applied in the World Frame convention.
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path /World/envs/env_0/Robot/Camera is /World/envs/env_0/Robot.
spawn_: PinholeCameraCfg | FisheyeCameraCfg | None_#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
depth_clipping_behavior_: Literal['max', 'zero', 'none']_#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- "max": Values are clipped to the maximum value.
- "zero": Values are clipped to zero.
- "none": No clipping is applied. Values will be returned as inf.
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the Camera class for a list of available data types.
Width of the image in pixels.
Height of the image in pixels.
update_latest_camera_pose_: bool_#
Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.
If True, the latest camera pose is updated in the camera’s data which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.
semantic_filter_: str | list[str]_#
A string or a list specifying a semantic filter predicate. Defaults to "*:*".
If a string, it should be a disjunctive normal form of (semantic type, labels). For examples:
"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE": All prims with semantic type “typeA” and label “labelA” but not “labelB” or with label “labelC”. Also, all prims with semantic type “typeB” and label “labelA”, or with semantic type “typeC” and label “labelE”."typeA : * ; * : labelA": All prims with semantic type “typeA” or with label “labelA”
If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.
colorize_semantic_segmentation_: bool_#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
colorize_instance_id_segmentation_: bool_#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance id segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
colorize_instance_segmentation_: bool_#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
semantic_segmentation_mapping_: dict_#
Dictionary mapping semantics to specific colors. For example:
{
    "class:cube_1": (255, 36, 66, 255),
    "class:cube_2": (255, 184, 48, 255),
    "class:cube_3": (55, 255, 139, 255),
    "class:table": (255, 237, 218, 255),
    "class:ground": (100, 100, 100, 255),
    "class:robot": (61, 178, 255, 255),
}
Contact Sensor#
class isaaclab.sensors.ContactSensor[source]#
Bases: SensorBase
A contact reporting sensor.
The contact sensor reports the normal contact forces on a rigid body in the world frame. It relies on the PhysX ContactReporter API to be activated on the rigid bodies.
To enable the contact reporter on a rigid body, please make sure to enable the isaaclab.sim.spawner.RigidObjectSpawnerCfg.activate_contact_sensors flag on your asset spawner configuration. This will enable the contact reporter on all the rigid bodies in the asset.
The sensor can be configured to report the contact forces on a set of bodies with a given filter pattern using the ContactSensorCfg.filter_prim_paths_expr. This is useful when you want to report the contact forces between the sensor bodies and a specific set of bodies in the scene. The data can be accessed using the ContactSensorData.force_matrix_w. Please check the documentation on RigidContact for more details.
The reporting of the filtered contact forces is only possible as one-to-many. This means that only one sensor body in an environment can be filtered against multiple bodies in that environment. If you need to filter multiple sensor bodies against multiple bodies, you need to create separate sensors for each sensor body.
As an example, suppose you want to report the contact forces for all the feet of a robot against an object exclusively. In that case, setting the ContactSensorCfg.prim_path and ContactSensorCfg.filter_prim_paths_expr to {ENV_REGEX_NS}/Robot/.*_FOOT and {ENV_REGEX_NS}/Object respectively will not work. Instead, you need to create a separate sensor for each foot and filter it against the object, as shown in the sketch below.
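A hedged sketch of that per-foot setup (the body and object paths are illustrative placeholders):

from isaaclab.sensors import ContactSensorCfg

# one filtered sensor per foot, each reporting forces against the same object
lf_foot_contact = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/LF_FOOT",
    filter_prim_paths_expr=["{ENV_REGEX_NS}/Object"],
)
rf_foot_contact = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/RF_FOOT",
    filter_prim_paths_expr=["{ENV_REGEX_NS}/Object"],
)
# a single unfiltered sensor can still cover all feet at once
all_feet_contact = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT",
    history_length=3,
    track_air_time=True,
)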
Attributes:
Methods:
cfg_: ContactSensorCfg_#
The configuration parameters.
__init__(cfg: ContactSensorCfg)[source]#
Initializes the contact sensor object.
Parameters:
cfg – The configuration parameters.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
property data_: ContactSensorData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
Number of bodies with contact sensors attached.
property body_names_: list[str]_#
Ordered names of bodies with contact sensors attached.
property body_physx_view_: omni.physics.tensors.impl.api.RigidBodyView_#
View for the rigid bodies captured (PhysX).
Note
Use this view with caution. It requires handling of tensors in a specific way.
property contact_physx_view_: omni.physics.tensors.impl.api.RigidContactView_#
Contact reporter view for the bodies (PhysX).
Note
Use this view with caution. It requires handling of tensors in a specific way.
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) → tuple[list[int], list[str]][source]#
Find bodies in the articulation based on the name keys.
Parameters:
- name_keys – A regular expression or a list of regular expressions to match the body names.
- preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.
Returns:
A tuple of lists containing the body indices and names.
compute_first_contact(dt: float, abs_tol: float = 1e-08) → torch.Tensor[source]#
Checks which bodies have established contact within the last dt seconds.
This function checks if the bodies have established contact within the last dt seconds by comparing the current contact time with the given time period. If the contact time is less than the given time period, then the bodies are considered to be in contact.
Note
The function assumes that dt is a factor of the sensor update time-step. In other words \(dt / dt_sensor = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
Parameters:
- dt – The time period since the contact was established.
- abs_tol – The absolute tolerance for the comparison.
Returns:
A boolean tensor indicating the bodies that have established contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Raises:
RuntimeError – If the sensor is not configured to track contact time.
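A sketch of a typical use, e.g. inside a gait reward (contact_sensor is assumed to be an initialized ContactSensor with air-time tracking enabled, and step_dt is the environment stepping time):

import torch

first_contact = contact_sensor.compute_first_contact(dt=step_dt)  # (N, B) boolean tensor
last_air_time = contact_sensor.data.last_air_time                 # (N, B) seconds in the air before touchdown
# reward feet that just touched down after spending more than 0.5 s in the air
reward = torch.sum((last_air_time - 0.5) * first_contact, dim=1)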
compute_first_air(dt: float, abs_tol: float = 1e-08) → torch.Tensor[source]#
Checks which bodies have broken contact within the last dt seconds.
This function checks if the bodies have broken contact within the last dt seconds by comparing the current air time with the given time period. If the air time is less than the given time period, then the bodies are considered to not be in contact.
Note
It assumes that dt is a factor of the sensor update time-step. In other words, \(dt / dt_sensor = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
Parameters:
- dt – The time period since the contact was broken.
- abs_tol – The absolute tolerance for the comparison.
Returns:
A boolean tensor indicating the bodies that have broken contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Raises:
RuntimeError – If the sensor is not configured to track contact time.
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.ContactSensorData[source]#
Data container for the contact reporting sensor.
Attributes:
pos_w_: torch.Tensor | None_ = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
Note
If the ContactSensorCfg.track_pose is False, then this quantity is None.
contact_pos_w_: torch.Tensor | None_ = None#
Average of the positions of contact points between sensor body and filter prim in world frame.
Shape is (N, B, M, 3), where N is the number of sensors, B is number of bodies in each sensor and M is the number of filtered bodies.
Collision pairs not in contact will result in NaN.
Note
- If the ContactSensorCfg.track_contact_points is False, then this quantity is None.
- If the ContactSensorCfg.track_contact_points is True, a ValueError will be raised and this quantity will not be calculated if:
  - the ContactSensorCfg.filter_prim_paths_expr is empty, or
  - the ContactSensorCfg.max_contact_data_per_prim is not specified or is less than 1.
friction_forces_w_: torch.Tensor | None_ = None#
Sum of the friction forces between sensor body and filter prim in world frame.
Shape is (N, B, M, 3), where N is the number of sensors, B is number of bodies in each sensor and M is the number of filtered bodies.
Collision pairs not in contact will result in NaN.
Note
- If the ContactSensorCfg.track_friction_forces is False, then this quantity is None.
- If the ContactSensorCfg.track_friction_forces is True, a ValueError will be raised if:
  - the ContactSensorCfg.filter_prim_paths_expr is empty, or
  - the ContactSensorCfg.max_contact_data_per_prim is not specified or is less than 1.
quat_w_: torch.Tensor | None_ = None#
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
Shape is (N, 4), where N is the number of sensors.
Note
If the ContactSensorCfg.track_pose is False, then this quantity is None.
net_forces_w_: torch.Tensor | None_ = None#
The net normal contact forces in world frame.
Shape is (N, B, 3), where N is the number of sensors and B is the number of bodies in each sensor.
Note
This quantity is the sum of the normal contact forces acting on the sensor bodies. It must not be confused with the total contact forces acting on the sensor bodies (which also includes the tangential forces).
net_forces_w_history_: torch.Tensor | None_ = None#
The net normal contact forces in world frame.
Shape is (N, T, B, 3), where N is the number of sensors, T is the configured history length and B is the number of bodies in each sensor.
In the history dimension, the first index is the most recent and the last index is the oldest.
Note
This quantity is the sum of the normal contact forces acting on the sensor bodies. It must not be confused with the total contact forces acting on the sensor bodies (which also includes the tangential forces).
force_matrix_w_: torch.Tensor | None_ = None#
The normal contact forces filtered between the sensor bodies and filtered bodies in world frame.
Shape is (N, B, M, 3), where N is the number of sensors, B is number of bodies in each sensor and M is the number of filtered bodies.
Note
If the ContactSensorCfg.filter_prim_paths_expr is empty, then this quantity is None.
force_matrix_w_history_: torch.Tensor | None_ = None#
The normal contact forces filtered between the sensor bodies and filtered bodies in world frame.
Shape is (N, T, B, M, 3), where N is the number of sensors, T is the configured history length, B is number of bodies in each sensor and M is the number of filtered bodies.
In the history dimension, the first index is the most recent and the last index is the oldest.
Note
If the ContactSensorCfg.filter_prim_paths_expr is empty, then this quantity is None.
last_air_time_: torch.Tensor | None_ = None#
Time spent (in s) in the air before the last contact.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the ContactSensorCfg.track_air_time is False, then this quantity is None.
current_air_time_: torch.Tensor | None_ = None#
Time spent (in s) in the air since the last detach.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the ContactSensorCfg.track_air_time is False, then this quantity is None.
last_contact_time_: torch.Tensor | None_ = None#
Time spent (in s) in contact before the last detach.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the ContactSensorCfg.track_air_time is False, then this quantity is None.
current_contact_time_: torch.Tensor | None_ = None#
Time spent (in s) in contact since the last contact.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the ContactSensorCfg.track_air_time is False, then this quantity is None.
class isaaclab.sensors.ContactSensorCfg[source]#
Bases: SensorBaseCfg
Configuration for the contact sensor.
Attributes:
Whether to track the pose of the sensor’s origin. Defaults to False.
Whether to track the contact point locations. Defaults to False.
track_friction_forces_: bool_#
Whether to track the friction forces at the contact points. Defaults to False.
max_contact_data_count_per_prim_: int_#
The maximum number of contacts across all batches of the sensor to keep track of. Defaults to 4.
This parameter sets the maximum number of contact data entries tracked by the simulation across all bodies and environments. The total number of contacts allowed is max_contact_data_count_per_prim * num_envs * num_sensor_bodies.
Note
If the environment is very contact-rich, it is suggested to increase this parameter to avoid out-of-bounds memory errors and loss of contact data, which would lead to inaccurate measurements.
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
Whether to track the air/contact time of the bodies (time between contacts). Defaults to False.
The threshold on the norm of the contact force that determines whether two bodies are in collision or not.
This value is only used for tracking the mode duration (the time in contact or in air), if track_air_time is True.
filter_prim_paths_expr_: list[str]_#
The list of primitive paths (or expressions) to filter contacts with. Defaults to an empty list, in which case no filtering is applied.
The contact sensor allows reporting contacts between the primitive specified with prim_path and other primitives in the scene. For instance, in a scene containing a robot, a ground plane and an object, you can obtain individual contact reports of the base of the robot with the ground plane and the object.
Note
The expression in the list can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Object will be replaced with /World/envs/env_.*/Object.
Attention
The reporting of filtered contacts only works when the sensor primitive prim_path corresponds to a single primitive in that environment. If the sensor primitive corresponds to multiple primitives, the filtering will not work as expected. Please check ContactSensor for more details. If track_contact_points is True, then filter_prim_paths_expr cannot be an empty list!
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to CONTACT_SENSOR_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Frame Transformer#
class isaaclab.sensors.FrameTransformer[source]#
Bases: SensorBase
A sensor for reporting frame transforms.
This class provides an interface for reporting the transform of one or more frames (target frames) with respect to another frame (source frame). The source frame is specified by the user as a prim path (FrameTransformerCfg.prim_path) and the target frames are specified by the user as a list of prim paths (FrameTransformerCfg.target_frames).
The source frame and target frames are assumed to be rigid bodies. The transform of the target frames with respect to the source frame is computed by first extracting the transform of the source frame and target frames from the physics engine and then computing the relative transform between the two.
Additionally, the user can specify an offset for the source frame and each target frame. This is useful for specifying the transform of the desired frame with respect to the body’s center of mass, for instance.
A common example of using this sensor is to track the position and orientation of the end effector of a robotic manipulator. In this case, the source frame would be the body corresponding to the base frame of the manipulator, and the target frame would be the body corresponding to the end effector. Since the end-effector is typically a fictitious body, the user may need to specify an offset from the end-effector to the body of the manipulator.
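A hedged configuration sketch for that end-effector case (the robot link names and the tool-tip offset are illustrative placeholders):

from isaaclab.sensors import FrameTransformerCfg, OffsetCfg

ee_frame = FrameTransformerCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",  # source frame (illustrative body name)
    target_frames=[
        FrameTransformerCfg.FrameCfg(
            prim_path="{ENV_REGEX_NS}/Robot/wrist_link",  # body closest to the end-effector
            name="end_effector",
            offset=OffsetCfg(pos=(0.0, 0.0, 0.1)),        # offset to the fictitious tool tip
        ),
    ],
)

The resulting transform is then available through data.target_pos_source and data.target_quat_source.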
Attributes:
Methods:
cfg_: FrameTransformerCfg_#
The configuration parameters.
__init__(cfg: FrameTransformerCfg)[source]#
Initializes the frame transformer object.
Parameters:
cfg – The configuration parameters.
property data_: FrameTransformerData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
Returns the number of target bodies being tracked.
Note
This is an alias used for consistency with other sensors. Otherwise, we recommend using len(data.target_frame_names) to access the number of target frames.
property body_names_: list[str]_#
Returns the names of the target bodies being tracked.
Note
This is an alias used for consistency with other sensors. Otherwise, we recommend using data.target_frame_names to access the target frame names.
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) → tuple[list[int], list[str]][source]#
Find bodies in the articulation based on the name keys.
Parameters:
- name_keys – A regular expression or a list of regular expressions to match the body names.
- preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.
Returns:
A tuple of lists containing the body indices and names.
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.FrameTransformerData[source]#
Data container for the frame transformer sensor.
Attributes:
target_frame_names_: list[str]_ = None#
Target frame names (this denotes the order in which that frame data is ordered).
The frame names are resolved from the FrameTransformerCfg.FrameCfg.name field. This does not necessarily follow the order in which the frames are defined in the config due to the regex matching.
target_pos_source_: torch.Tensor_ = None#
Position of the target frame(s) relative to source frame.
Shape is (N, M, 3), where N is the number of environments, and M is the number of target frames.
target_quat_source_: torch.Tensor_ = None#
Orientation of the target frame(s) relative to source frame quaternion (w, x, y, z).
Shape is (N, M, 4), where N is the number of environments, and M is the number of target frames.
target_pos_w_: torch.Tensor_ = None#
Position of the target frame(s) after offset (in world frame).
Shape is (N, M, 3), where N is the number of environments, and M is the number of target frames.
target_quat_w_: torch.Tensor_ = None#
Orientation of the target frame(s) after offset (in world frame) quaternion (w, x, y, z).
Shape is (N, M, 4), where N is the number of environments, and M is the number of target frames.
source_pos_w_: torch.Tensor_ = None#
Position of the source frame after offset (in world frame).
Shape is (N, 3), where N is the number of environments.
source_quat_w_: torch.Tensor_ = None#
Orientation of the source frame after offset (in world frame) quaternion (w, x, y, z).
Shape is (N, 4), where N is the number of environments.
class isaaclab.sensors.FrameTransformerCfg[source]#
Bases: SensorBaseCfg
Configuration for the frame transformer sensor.
Classes:
Attributes:
class FrameCfg#
Bases: object
Information specific to a coordinate frame.
Attributes:
The prim path corresponding to a rigid body.
This can be a regex pattern to match multiple prims. For example, “/Robot/.*” will match all prims under “/Robot”.
This means that if the source FrameTransformerCfg.prim_path is “/Robot/base”, and the target FrameTransformerCfg.FrameCfg.prim_path is “/Robot/.*”, then the frame transformer will track the poses of all the prims under “/Robot”, including “/Robot/base” (even though this will result in an identity pose w.r.t. the source frame).
User-defined name for the new coordinate frame. Defaults to None.
If None, then the name is extracted from the leaf of the prim path.
The pose offset from the parent prim frame.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The prim path of the body to transform from (source frame).
source_frame_offset_: OffsetCfg_#
The pose offset from the source prim frame.
target_frames_: list[isaaclab.sensors.frame_transformer.frame_transformer_cfg.FrameTransformerCfg.FrameCfg]_#
A list of the target frames.
This allows a single FrameTransformer to handle multiple target prims. For example, in a quadruped, we can use a single FrameTransformer to track each foot’s position and orientation in the body frame using four frame offsets.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to FRAME_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
class isaaclab.sensors.OffsetCfg[source]#
The offset pose of one frame relative to another frame.
Attributes:
pos_: tuple[float, float, float]_#
Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).
rot_: tuple[float, float, float, float]_#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
Ray-Cast Sensor#
class isaaclab.sensors.RayCaster[source]#
Bases: SensorBase
A ray-casting sensor.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the warp_meshes list. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
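A hedged configuration sketch for a height scanner attached to a robot base (the paths, pattern size, and offset are illustrative; the pattern configuration module is assumed to be importable as shown):

from isaaclab.sensors import RayCasterCfg, patterns

height_scanner = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    ray_alignment="yaw",  # rays track position and yaw only, as for height maps
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    mesh_prim_paths=["/World/ground"],
    debug_vis=True,
)

The hit points are then read from data.ray_hits_w, e.g. terrain heights as height_scanner.data.ray_hits_w[..., 2].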
Attributes:
Methods:
cfg_: RayCasterCfg_#
The configuration parameters.
meshes_: ClassVar[dict[str, wp.Mesh]]_ = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
__init__(cfg: RayCasterCfg)[source]#
Initializes the ray-caster object.
Parameters:
cfg – The configuration parameters.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
property data_: RayCasterData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.RayCasterData[source]#
Data container for the ray-cast sensor.
Attributes:
pos_w_: torch.Tensor_ = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
quat_w_: torch.Tensor_ = None#
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
Shape is (N, 4), where N is the number of sensors.
ray_hits_w_: torch.Tensor_ = None#
The ray hit positions in the world frame.
Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
class isaaclab.sensors.RayCasterCfg[source]#
Bases: SensorBaseCfg
Configuration for the ray-cast sensor.
Classes:
Attributes:
class OffsetCfg#
Bases: object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
pos_: tuple[float, float, float]_#
Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).
rot_: tuple[float, float, float, float]_#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
attach_yaw_only_: bool | None_#
Whether the rays' starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.
To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
ray_alignment_: Literal['base', 'yaw', 'world']_#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays' starting positions and directions track the full root position and orientation.
- "yaw": the rays' starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays' starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
pattern_cfg_: PatternBaseCfg_#
The pattern that defines the local ray starting positions and directions.
Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.
drift_range_: tuple[float, float]_#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
ray_cast_drift_range_: dict[str, tuple[float, float]]_#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
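As a hedged example, a typical height-scanner configuration could combine the attributes above as follows. The prim paths, pattern resolution, and offset height are illustrative values, not defaults:
from isaaclab.sensors import RayCasterCfg
from isaaclab.sensors.ray_caster import patterns

# illustrative height-scanner configuration; paths and numeric values are example choices
height_scanner_cfg = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    update_period=0.02,                                  # update the buffers at 50 Hz
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    ray_alignment="yaw",                                 # track position and yaw only (height maps)
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    mesh_prim_paths=["/World/ground"],
    max_distance=100.0,
    debug_vis=False,
    # ray_cast_drift_range={"x": (-0.02, 0.02), "y": (-0.02, 0.02), "z": (0.0, 0.0)},  # optional drift (keys assumed)
)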
Ray-Cast Camera#
class isaaclab.sensors.RayCasterCamera[source]#
Bases: RayCaster
A ray-casting camera sensor.
The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor's local coordinate frame. The sensor has the same interface as the isaaclab.sensors.Camera class, which implements the camera through USD camera prims. However, this class provides a faster image generation. The sensor converts meshes from the list of primitive paths provided in the configuration to Warp meshes. The camera then ray-casts against these Warp meshes only.
Currently, only the following annotators are supported:
"distance_to_camera": An image containing the distance to camera optical center."distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis."normals": An image containing the local surface normal vectors at each pixel.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
Attributes:
Methods:
cfg_: RayCasterCameraCfg_#
The configuration parameters.
UNSUPPORTED_TYPES_: ClassVar[set[str]]_ = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#
A set of sensor types that are not supported by the ray-caster camera.
__init__(cfg: RayCasterCameraCfg)[source]#
Initializes the camera object.
Parameters:
cfg – The configuration parameters.
Raises:
ValueError – If the provided data types are not supported by the ray-caster camera.
property data_: CameraData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where _data is the data for the sensor)
return self._data
property image_shape_: tuple[int, int]_#
A tuple containing (height, width) of the camera sensor.
property frame_: torch.tensor_#
Frame number when the measurement took place.
set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)[source]#
Set the intrinsic matrix of the camera.
Parameters:
- matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
- focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
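A minimal usage sketch of this method is given below, assuming camera is an initialized RayCasterCamera; the intrinsic values (fx, fy, cx, cy) are illustrative:
import torch

# illustrative pinhole intrinsics (focal lengths and principal point in pixels)
fx, fy, cx, cy = 320.0, 320.0, 160.0, 120.0
K = torch.tensor([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]], device=camera.device)
# broadcast the same intrinsics to every sensor instance (assumption: `camera` is an initialized RayCasterCamera)
camera.set_intrinsic_matrices(K.repeat(camera.num_instances, 1, 1))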
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See isaaclab.utils.math.convert_camera_frame_orientation_convention() for more details on the conventions.
Parameters:
- positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
- orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- convention – The convention in which the poses are fed. Defaults to “ros”.
Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
Parameters:
- eyes – The positions of the camera's eye. Shape is (N, 3).
- targets – The target locations to look at. Shape is (N, 3).
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
Raises:
- RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- NotImplementedError – If the stage up-axis is not "Y" or "Z".
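As a hedged usage sketch, all camera instances can be aimed at a common point as follows; the camera variable and the eye/target values are illustrative assumptions:
import torch

# place every camera at an illustrative world position and aim it at the world origin
eyes = torch.tensor([[2.0, 0.0, 1.0]], device=camera.device).repeat(camera.num_instances, 1)   # (N, 3)
targets = torch.zeros(camera.num_instances, 3, device=camera.device)                            # (N, 3)
camera.set_world_poses_from_view(eyes, targets)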
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
meshes_: ClassVar[dict[str, wp.Mesh]]_ = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.RayCasterCameraCfg[source]#
Bases: RayCasterCfg
Configuration for the ray-cast camera sensor.
Attributes:
offset_: OffsetCfg_#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
attach_yaw_only_: bool | None_#
Whether the rays' starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.
To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
ray_alignment_: Literal['base', 'yaw', 'world']_#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays' starting positions and directions track the full root position and orientation.
- "yaw": the rays' starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays' starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.
drift_range_: tuple[float, float]_#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
ray_cast_drift_range_: dict[str, tuple[float, float]]_#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].
depth_clipping_behavior_: Literal['max', 'zero', 'none']_#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- "max": Values are clipped to the maximum value.
- "zero": Values are clipped to zero.
- "none": No clipping is applied. Values will be returned as inf for distance_to_camera and nan for distance_to_image_plane data types.
pattern_cfg_: PinholeCameraPatternCfg_#
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
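For illustration, a depth-camera style configuration could be assembled as sketched below. The offset, pattern values, and mesh path are example choices, not defaults:
from isaaclab.sensors import RayCasterCameraCfg
from isaaclab.sensors.ray_caster import patterns

# illustrative ray-cast camera configuration; numeric values and paths are example choices
depth_camera_cfg = RayCasterCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCameraCfg.OffsetCfg(pos=(0.2, 0.0, 0.1), rot=(0.5, -0.5, 0.5, -0.5), convention="ros"),
    data_types=["distance_to_image_plane", "normals"],
    pattern_cfg=patterns.PinholeCameraPatternCfg(
        focal_length=24.0, horizontal_aperture=20.955, height=120, width=160
    ),
    mesh_prim_paths=["/World/ground"],
    max_distance=20.0,
    depth_clipping_behavior="max",   # clip far returns to the maximum value instead of inf/nan
)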
Multi-Mesh Ray-Cast Sensor#
class isaaclab.sensors.MultiMeshRayCaster[source]#
Bases: RayCaster
A multi-mesh ray-casting sensor.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the meshes list. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.
Compared to the default RayCaster, the MultiMeshRayCaster provides the following enhancements:
- Raycasting against multiple target types: Supports primitive shapes (spheres, cubes, etc.) as well as arbitrary meshes.
- Dynamic mesh tracking: Keeps track of specified meshes, enabling raycasting against moving parts (e.g., robot links, articulated bodies, or dynamic obstacles).
- Memory-efficient caching: Avoids redundant memory usage by reusing mesh data across environments.
Example usage to raycast against the visual meshes of a robot (e.g. ANYmal):
ray_caster_cfg = MultiMeshRayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot",
    mesh_prim_paths=[
        "/World/Ground",
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/base/visuals"),
    ],
    ray_alignment="world",
    pattern_cfg=patterns.GridPatternCfg(resolution=0.02, size=(2.5, 2.5), direction=(0, 0, -1)),
)
Attributes:
Methods:
cfg_: MultiMeshRayCasterCfg_#
The configuration parameters.
mesh_views_: ClassVar[dict[str, XFormPrim | physx.ArticulationView | physx.RigidBodyView]]_ = {}#
A dictionary to store mesh views for raycasting, shared across all instances.
The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.
__init__(cfg: MultiMeshRayCasterCfg)[source]#
Initializes the ray-caster object.
Parameters:
cfg – The configuration parameters.
property data_: MultiMeshRayCasterData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where _data is the data for the sensor)
return self._data
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
meshes_: ClassVar[dict[str, wp.Mesh]]_ = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
reset(env_ids: Sequence[int] | None = None)#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.MultiMeshRayCasterData[source]#
Data container for the multi-mesh ray-cast sensor.
Attributes:
pos_w_: torch.Tensor_ = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
quat_w_: torch.Tensor_ = None#
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
Shape is (N, 4), where N is the number of sensors.
ray_hits_w_: torch.Tensor_ = None#
The ray hit positions in the world frame.
Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
ray_mesh_ids_: torch.Tensor_ = None#
The mesh ids of the ray hits.
Shape is (N, B, 1), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
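As a hedged sketch, the mesh ids can be used to filter hits per target mesh; the ray_caster variable and the id value are illustrative assumptions:
# assumption: `ray_caster` is an initialized isaaclab.sensors.MultiMeshRayCaster
data = ray_caster.data
ground_id = 0                                               # illustrative mesh id of the ground target
on_ground = data.ray_mesh_ids.squeeze(-1) == ground_id      # (N, B) boolean mask
ground_hits = data.ray_hits_w[on_ground]                    # (num_hits, 3) hit positions on that mesh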
class isaaclab.sensors.MultiMeshRayCasterCfg[source]#
Bases: RayCasterCfg
Configuration for the multi-mesh ray-cast sensor.
Classes:
Attributes:
class RaycastTargetCfg[source]#
Bases: object
Configuration for different ray-cast targets.
Attributes:
The regex to specify the target prim to ray cast against.
Whether the target prim is assumed to be the same mesh across all environments. Defaults to False.
If True, only the first mesh is read and then reused for all environments, rather than re-parsed. This provides a startup performance boost when there are many environments that all use the same asset.
Note
If MultiMeshRayCasterCfg.reference_meshes is False, this flag has no effect.
Whether to merge the parsed meshes for a prim that contains multiple meshes. Defaults to True.
This will create a new mesh that combines all meshes in the parsed prim. The raycast hits mesh IDs will then refer to the single merged mesh.
track_mesh_transforms_: bool_#
Whether the mesh transformations should be tracked. Defaults to True.
Note
Not tracking the mesh transformations is recommended when the meshes are static to increase performance.
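For illustration, a static ground target and a dynamic obstacle target could be declared as follows; the obstacle prim path is an assumed example:
from isaaclab.sensors import MultiMeshRayCasterCfg

# static mesh: skip per-step transform updates for performance
static_ground = MultiMeshRayCasterCfg.RaycastTargetCfg(
    prim_expr="/World/Ground",
    track_mesh_transforms=False,
)
# dynamic mesh (illustrative prim path): keep tracking its pose every update
moving_obstacle = MultiMeshRayCasterCfg.RaycastTargetCfg(
    prim_expr="{ENV_REGEX_NS}/Obstacle",
    track_mesh_transforms=True,
)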
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
attach_yaw_only_: bool | None_#
Whether the rays' starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.
To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
ray_alignment_: Literal['base', 'yaw', 'world']_#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays' starting positions and directions track the full root position and orientation.
- "yaw": the rays' starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays' starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
pattern_cfg_: PatternBaseCfg_#
The pattern that defines the local ray starting positions and directions.
Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.
drift_range_: tuple[float, float]_#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
ray_cast_drift_range_: dict[str, tuple[float, float]]_#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
mesh_prim_paths_: list[str | isaaclab.sensors.ray_caster.multi_mesh_ray_caster_cfg.MultiMeshRayCasterCfg.RaycastTargetCfg]_#
The list of mesh primitive paths to ray cast against.
If an entry is a string, it is internally converted to a RaycastTargetCfg with RaycastTargetCfg.track_mesh_transforms disabled. This ensures backwards compatibility with the default ray-caster.
Whether to update the mesh ids of the ray hits in the data container.
Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.
When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.
Multi-Mesh Ray-Cast Camera#
class isaaclab.sensors.MultiMeshRayCasterCamera[source]#
Bases: RayCasterCamera, MultiMeshRayCaster
A multi-mesh ray-casting camera sensor.
The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor's local coordinate frame. The sensor has the same interface as the isaaclab.sensors.Camera class, which implements the camera through USD camera prims. However, this class provides a faster image generation. The sensor converts meshes from the list of primitive paths provided in the configuration to Warp meshes. The camera then ray-casts against these Warp meshes only.
Currently, only the following annotators are supported:
"distance_to_camera": An image containing the distance to camera optical center."distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis."normals": An image containing the local surface normal vectors at each pixel.
Attributes:
Methods:
cfg_: MultiMeshRayCasterCameraCfg_#
The configuration parameters.
__init__(cfg: MultiMeshRayCasterCameraCfg)[source]#
Initializes the camera object.
Parameters:
cfg – The configuration parameters.
Raises:
ValueError – If the provided data types are not supported by the ray-caster camera.
UNSUPPORTED_TYPES_: ClassVar[set[str]]_ = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#
A set of sensor types that are not supported by the ray-caster camera.
property data_: CameraData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where _data is the data for the sensor)
return self._data
Memory device for computation.
property frame_: torch.tensor_#
Frame number when the measurement took place.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property image_shape_: tuple[int, int]_#
A tuple containing (height, width) of the camera sensor.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
mesh_views_: ClassVar[dict[str, XFormPrim | physx.ArticulationView | physx.RigidBodyView]]_ = {}#
A dictionary to store mesh views for raycasting, shared across all instances.
The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.
meshes_: ClassVar[dict[str, wp.Mesh]]_ = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
reset(env_ids: Sequence[int] | None = None)#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)#
Set the intrinsic matrix of the camera.
Parameters:
- matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
- focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See isaaclab.utils.math.convert_camera_frame_orientation_convention() for more details on the conventions.
Parameters:
- positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
- orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- convention – The convention in which the poses are fed. Defaults to “ros”.
Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#
Set the poses of the camera from the eye position and look-at target position.
Parameters:
- eyes – The positions of the camera's eye. Shape is (N, 3).
- targets – The target locations to look at. Shape is (N, 3).
- env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
Raises:
- RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- NotImplementedError – If the stage up-axis is not "Y" or "Z".
class isaaclab.sensors.MultiMeshRayCasterCameraCfg[source]#
Bases: RayCasterCameraCfg, MultiMeshRayCasterCfg
Configuration for the multi-mesh ray-cast camera sensor.
Attributes:
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The list of mesh primitive paths to ray cast against.
If an entry is a string, it is internally converted to a RaycastTargetCfg with RaycastTargetCfg.track_mesh_transforms disabled. This ensures backwards compatibility with the default ray-caster.
offset_: OffsetCfg_#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
attach_yaw_only_: bool | None_#
Whether the rays' starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.
To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
ray_alignment_: Literal['base', 'yaw', 'world']_#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays' starting positions and directions track the full root position and orientation.
- "yaw": the rays' starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays' starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
pattern_cfg_: PinholeCameraPatternCfg_#
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.
drift_range_: tuple[float, float]_#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
ray_cast_drift_range_: dict[str, tuple[float, float]]_#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Whether to update the mesh ids of the ray hits in the data container.
Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.
When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.
List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].
depth_clipping_behavior_: Literal['max', 'zero', 'none']_#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- "max": Values are clipped to the maximum value.
- "zero": Values are clipped to zero.
- "none": No clipping is applied. Values will be returned as inf for distance_to_camera and nan for distance_to_image_plane data types.
Inertial Measurement Unit#
class isaaclab.sensors.Imu[source]#
Bases: SensorBase
The Inertial Measurement Unit (IMU) sensor.
The sensor can be attached to any prim path with a rigid ancestor in its tree. It reports the sensor's world-frame pose along with its body-frame linear and angular velocities and accelerations.
If the provided path is not a rigid body, the closest rigid-body ancestor is used for simulation queries. The fixed transform from that ancestor to the target prim is computed once during initialization and composed with the configured sensor offset.
Note
The accelerations are computed by numerically differentiating the velocities. Consequently, the IMU sensor accuracy depends on the chosen PhysX timestep. For sufficient accuracy, we recommend a simulation timestep of at least 200 Hz.
Note
The user can configure the sensor offset in the configuration class. The offset is applied relative to the rigid source prim; if the target prim is not a rigid body, the offset is composed with the fixed transform from the rigid ancestor to the target prim. The offset is defined as a position vector and a quaternion rotation, both expressed in the body frame of the rigid source prim, and is applied as the translation followed by the rotation.
Attributes:
Methods:
The configuration parameters.
__init__(cfg: ImuCfg)[source]#
Initializes the Imu sensor.
Parameters:
cfg – The configuration parameters.
property data_: ImuData_#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where _data is the data for the sensor)
return self._data
property num_instances_: int_#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
Parameters:
env_ids – The sensor ids to reset. Defaults to None.
Memory device for computation.
property has_debug_vis_implementation_: bool_#
Whether the sensor has a debug visualization implemented.
property is_initialized_: bool_#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
Parameters:
debug_vis – Whether to visualize the sensor data.
Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
class isaaclab.sensors.ImuCfg[source]#
Bases: SensorBaseCfg
Configuration for an Inertial Measurement Unit (IMU) sensor.
Classes:
Attributes:
class OffsetCfg[source]#
Bases: object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
pos_: tuple[float, float, float]_#
Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).
rot_: tuple[float, float, float, float]_#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
Whether to visualize the sensor. Defaults to False.
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
visualizer_cfg_: VisualizationMarkersCfg_#
The configuration object for the visualization markers. Defaults to RED_ARROW_X_MARKER_CFG.
This attribute is only used when debug visualization is enabled.
gravity_bias_: tuple[float, float, float]_#
The linear acceleration bias applied to the measured linear acceleration, expressed in the world frame (x, y, z).
IMU sensors typically report a positive acceleration opposing the direction of gravity. The default of (0.0, 0.0, 9.81) reproduces this convention and yields a positive acceleration reading along the world Z axis; setting this parameter to (0.0, 0.0, 0.0) removes the bias.
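As a hedged end-to-end sketch, an IMU could be configured and read as follows. The prim path, offset, and sensor variable name are illustrative, and the data field names are assumptions based on the Imu data container rather than values confirmed in this page:
from isaaclab.sensors import ImuCfg

# illustrative IMU configuration; paths and offset are example values
imu_cfg = ImuCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    update_period=0.005,                              # 200 Hz, in line with the accuracy note above
    offset=ImuCfg.OffsetCfg(pos=(0.0, 0.0, 0.05)),
    gravity_bias=(0.0, 0.0, 9.81),                    # keep the conventional +g reading along world Z
    debug_vis=False,
)

# once the sensor is created in a scene, its measurements can be read lazily, e.g. (field names assumed):
# lin_acc_b = imu_sensor.data.lin_acc_b   # (N, 3) body-frame linear acceleration
# ang_vel_b = imu_sensor.data.ang_vel_b   # (N, 3) body-frame angular velocity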