🔧 Customizing Robots
Robots can have both their action spaces (types of control commands) and observation spaces (types of sensor modalities) customized to suit specific use cases. This can be done either prior to import (via a config) or dynamically at runtime. Below, we describe a recommended workflow for modifying both sets of these properties.
Customizing Action Spaces
A robot is equipped with multiple controllers, each of which controls a subset of the robot's low-level joint motors. Together, these controllers' inputs form the robot's action space. For example, a Fetch robot consists of (a) a base controller controlling its two wheels, (b) a head controller controlling its two head joints, (c) an arm controller controlling its seven arm joints, and (d) a gripper controller controlling its two gripper joints (resulting in 13 DOF being controlled). An example set of controllers would be a `DifferentialDriveController` for the base, `JointController`s for the head and arm, and a binary `MultiFingerGripperController` for the gripper. In this case, the action space size would be 2 + 2 + 7 + 1 = 12. If we were to use an `InverseKinematicsController` commanding the 6-DOF end-effector pose instead of the `JointController` for the arm, the action space size would be 2 + 2 + 6 + 1 = 11. Each of these controllers can be individually configured and swapped out for each robot.
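The action-space arithmetic above can be sketched in plain Python (this is illustrative bookkeeping, not OmniGibson API -- each entry is the number of commands a given controller consumes):

```python
# Commands consumed by each controller in the Fetch example:
# DifferentialDriveController -> (linear, angular) velocity = 2 inputs,
# JointControllers -> one input per controlled joint,
# binary MultiFingerGripperController -> a single open/close input.
fetch_joint_ctrl = {"base": 2, "head": 2, "arm": 7, "gripper": 1}
print(sum(fetch_joint_ctrl.values()))  # 12

# Swapping the arm's JointController (7 joint commands) for an
# InverseKinematicsController (6-DOF end-effector pose) shrinks the space:
fetch_ik_ctrl = {**fetch_joint_ctrl, "arm": 6}
print(sum(fetch_ik_ctrl.values()))  # 11
```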
Modifying Via Config
One way to customize a robot's set of controllers is to manually set the desired controller configuration in the environment config file when creating an OmniGibson environment. An example is shown below:
fetch_controller_cfg.yaml
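The referenced config file is not reproduced here; as an illustrative sketch, the same structure can be written as the Python dict an OmniGibson YAML config maps to. The `name` key and the specific kwargs shown (`command_input_limits`, `mode`) are assumptions based on the surrounding text, not verified API:

```python
# Sketch of a per-component controller configuration for a Fetch robot.
# Each entry names a controller type and optional init kwargs; unspecified
# components/kwargs fall back to the robot's defaults.
controller_config = {
    "base": {"name": "DifferentialDriveController"},
    "arm_0": {
        "name": "InverseKinematicsController",
        "command_input_limits": [-0.2, 0.2],  # hypothetical example kwarg
    },
    "gripper_0": {"name": "MultiFingerGripperController", "mode": "binary"},
    "camera": {"name": "JointController"},
}

# Nested into an environment config, the dict would sit under the robot entry:
cfg = {"robots": [{"type": "Fetch", "controller_config": controller_config}]}
```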
In the above example, the types of controllers are specified for each component of the robot (`base`, `arm_0`, `gripper_0`, `camera`), and additional relevant keyword arguments to pass to the specific controller init calls can also be specified. If a controller or any keyword arguments are not specified for a given component, a default set of values will be used, as specified in the robot class itself (the `_default_controller_config` property). Please see the Controllers section for additional details on controller arguments. Do note that if `action_normalize=True` is passed as a robot-level kwarg, it will automatically overwrite any `command_input_limits` passed via the controller config, since it will assume a normalization range of `[-1, 1]`.
Alternatively, if directly instantiating a robot class, the controller config can be directly passed into the constructor, e.g.:
import_fetch_controller.py
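The referenced script is not reproduced here; a minimal sketch of the pattern follows. The import path and constructor kwargs mirror OmniGibson's conventions but may differ across versions, so the instantiation is shown as comments -- check your installed API:

```python
# Build the controller config as a plain dict, then hand it to the robot
# constructor. Kwarg values here are illustrative.
controller_config = {
    "arm_0": {"name": "InverseKinematicsController"},
    "gripper_0": {"name": "MultiFingerGripperController", "mode": "binary"},
}

# Hedged sketch of the instantiation (assumed signature):
# from omnigibson.robots import Fetch
# robot = Fetch(name="fetch", controller_config=controller_config)
```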
Modifying At Runtime
Robots' action spaces can also be modified at runtime after a robot has been imported, effectively re-loading a set of (potentially different) controllers. This is achieved by defining the new desired controller config and then calling `reload_controllers()`:
reload_fetch_controllers.py
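The referenced script is not reproduced here; a minimal sketch of the pattern follows. The robot handle, the `reload_controllers()` kwarg name, and the `use_delta_commands` kwarg are assumptions, so the API calls are shown as comments:

```python
# Define a new (potentially different) controller config for the arm only;
# unspecified components keep their current controllers.
new_controller_config = {
    "arm_0": {"name": "JointController", "use_delta_commands": True},
}

# Hedged sketch of the runtime swap (assumed signature):
# robot = env.robots[0]
# robot.reload_controllers(controller_config=new_controller_config)
# Note that the robot's action space changes accordingly after the reload.
```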
Customizing Observation Spaces
A robot is equipped with multiple onboard sensors, each of which can be configured to return a unique set of observations. Together, these observation modalities form the robot's observation space. For example, a Turtlebot robot consists of (a) a LIDAR (`ScanSensor`) at its base, (b) an RGB-D camera (`VisionSensor`) at its head, and (c) onboard proprioception. An example set of observations would be the modalities `["rgb", "normal", "proprio", "scan"]`, which would return RGB and surface normal maps, proprioception, and 2D radial LIDAR distances. Each of these modalities can be swapped out, depending on the robot's set of equipped onboard sensors. Please see the individual sensor classes for specific supported modalities.
Modifying Via Config
One way to customize a robot's set of observations is to manually set the desired sensor configuration in the environment config file when creating an OmniGibson environment. An example is shown below:
turtlebot_obs_cfg.yaml
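The referenced config file is not reproduced here; as an illustrative sketch, the same structure can be written as the Python dict an OmniGibson YAML config maps to. The nesting under `sensor_kwargs` and the specific kwarg names (`image_height`, `min_range`, etc.) are assumptions based on the surrounding text:

```python
# Sketch of a robot entry requesting specific observation modalities and
# per-sensor settings. Values are illustrative.
robot_cfg = {
    "type": "Turtlebot",
    "obs_modalities": ["rgb", "normal", "proprio", "scan"],
    "sensor_config": {
        "VisionSensor": {
            "sensor_kwargs": {"image_height": 128, "image_width": 128},
        },
        "ScanSensor": {
            "sensor_kwargs": {"min_range": 0.05, "max_range": 10.0},
        },
    },
}
```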
In the above example, the observation modalities are specified via the `obs_modalities` kwarg. Each type of sensor can be configured as well via the `sensor_config` dictionary argument -- attributes such as image size and LIDAR range limits can be specified here. Specific proprioception values can be requested by setting the `proprio_obs` kwarg, which by default will return all available proprioception values (these can be viewed via `robot.default_proprio_obs`). Note that proprioception will only be used if `proprio` is specified in `obs_modalities`.
Alternatively, if directly instantiating a robot class, the observation modalities and sensor config can be directly passed into the constructor, e.g.:
import_turtlebot_sensor.py
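The referenced script is not reproduced here; a minimal sketch of the pattern follows. The import path and constructor kwargs mirror OmniGibson's conventions but are not verified here, so the instantiation is shown as comments:

```python
# Choose the observation modalities and per-sensor settings up front.
obs_modalities = ["rgb", "depth", "scan"]
sensor_config = {
    "VisionSensor": {
        "sensor_kwargs": {"image_height": 128, "image_width": 128},
    },
}

# Hedged sketch of the instantiation (assumed signature):
# from omnigibson.robots import Turtlebot
# robot = Turtlebot(name="turtlebot",
#                   obs_modalities=obs_modalities,
#                   sensor_config=sensor_config)
```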
Modifying At Runtime
In general, dynamically configuring a robot's set of observations at runtime is not supported. However, if a robot has either a `ScanSensor` or a `VisionSensor` onboard, that sensor's set of active modalities can be dynamically updated. This is achieved by directly calling `add_modality()` or `remove_modality()` on a specific sensor. An example is shown below:
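The original listing is not reproduced here. As a stand-in, the sketch below illustrates the semantics of `add_modality()` / `remove_modality()` with a toy sensor class; on a real robot, you would call these methods on an actual sensor object (e.g. a `VisionSensor` retrieved from the robot's set of onboard sensors):

```python
# Toy stand-in for a sensor with a mutable set of active modalities.
# Illustrative only -- in OmniGibson these are methods on real sensor objects.
class ToySensor:
    def __init__(self, modalities):
        self.modalities = set(modalities)

    def add_modality(self, modality):
        # Start returning this modality in the sensor's observations.
        self.modalities.add(modality)

    def remove_modality(self, modality):
        # Stop returning this modality.
        self.modalities.discard(modality)

sensor = ToySensor({"rgb", "normal"})
sensor.add_modality("depth")      # begin producing depth maps
sensor.remove_modality("normal")  # stop producing surface normals
print(sorted(sensor.modalities))  # ['depth', 'rgb']
```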