🧱 Building Blocks

OmniGibson ships with many demo scripts that highlight its modularity and diverse feature set, intended as building blocks for your research. Let's try them out!


⚙️ A quick word about macros

Why macros?

Macros enforce global behavior that is consistent within an individual Python process but can differ between processes. This is useful because globally enabling all of OmniGibson's features can cause unnecessary slowdowns (1), so configuring the macros for your specific use case can optimize performance.

  1. For example, Omniverse provides a so-called flatcache feature which provides significant performance boosts, but cannot be used when fluids or soft bodies are present. So we should ideally always set gm.ENABLE_FLATCACHE = True unless our environment contains fluids or soft bodies.

Macros define a globally available set of magic numbers or flags used throughout OmniGibson. These can either be set directly in omnigibson/macros.py, or modified programmatically at runtime via:

from omnigibson.macros import gm, macros

gm.<GLOBAL_MACRO> = <VALUE> # (1)!
macros.<OG_DIRECTORY>.<OG_MODULE>.<MODULE_MACRO> = <VALUE> # (2)!
  1. gm refers to the "global" macros -- i.e.: settings that generally impact the entire OmniGibson stack. These are usually the only settings you may need to modify.
  2. macros captures all remaining macros defined throughout OmniGibson's codebase -- these are often hardcoded default settings or magic numbers defined in a specific module. These can also be overridden, but we recommend inspecting the module first to understand how it is used.

Many of our examples set various macros at the beginning of the script, which is a good way to understand use cases for modifying them!
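
For example, a script that uses object states but no fluids or soft bodies might configure its macros like this (a minimal sketch; all macros shown here are ones used by the demos below):

from omnigibson.macros import gm

# Enable object states (e.g., Attached, Covered) for this process
gm.ENABLE_OBJECT_STATES = True

# No fluids or soft bodies here, so flatcache is safe and gives a speedup
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True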


🌎 Environments

These examples showcase the full OmniGibson stack in use, and the types of environments immediately supported.

BEHAVIOR Task Demo

This demo is useful for...

  • Understanding how to instantiate a BEHAVIOR task
  • Understanding how a pre-defined configuration file is used
python -m omnigibson.examples.environments.behavior_env_demo

This demo instantiates one of our BEHAVIOR tasks (optionally sampling object locations online) in a fully-populated scene and loads a Fetch robot. The robot executes random actions and the environment is reset periodically.

behavior_env_demo.py
import logging
import os

import yaml

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Generates a BEHAVIOR Task environment from a pre-defined configuration file.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Ask the user whether they want online object sampling or not
    sampling_options = {
        False: "Use a pre-sampled cached BEHAVIOR activity scene",
        True: "Sample the BEHAVIOR activity in an online fashion",
    }
    should_sample = choose_from_options(options=sampling_options, name="online object sampling", random_selection=random_selection)

    # Load the pre-selected configuration and set the online_sampling flag
    config_filename = os.path.join(og.example_config_path, "fetch_behavior.yaml")
    cfg = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)
    cfg["task"]["online_object_sampling"] = should_sample

    # If we're online sampling, make sure global contacts are enabled so we can accurately detect kinematic changes
    if should_sample:
        gm.ENABLE_GLOBAL_CONTACT_REPORTING = True

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

Navigation Task Demo

This demo is useful for...

  • Understanding how to instantiate a navigation task
  • Understanding how a pre-defined configuration file is used
python -m omnigibson.examples.environments.navigation_env_demo

This demo instantiates one of our navigation tasks in a fully-populated scene and loads a Turtlebot robot. The robot executes random actions and the environment is reset periodically.

navigation_env_demo.py
import logging
import os

import yaml

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options

# Use flatcache for performance speedups unless we're using GPU dynamics
if not gm.USE_GPU_DYNAMICS:
    gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select a type of scene and loads a turtlebot into it, generating a Point-Goal navigation
    task within the environment.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Infer which config to load based on the scene selected
    scene_options = {
        "InteractiveTraversableScene": "Rs_int scene with fully interactive objects",
        "StaticTraversableScene": "Adrian scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)
    config_name = "turtlebot_nav" if scene_type == "InteractiveTraversableScene" else "turtlebot_static_nav"
    config_filename = os.path.join(og.example_config_path, f"{config_name}.yaml")
    config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=config)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

Config Selection Demo

This demo is useful for...

  • Understanding how pre-defined configuration files are used
python -m omnigibson.examples.environments.config_selector

This demo lets you choose one of our environment configuration files, loads the corresponding environment, and then cycles it periodically.

config_selector.py
import logging
import os

import yaml

import omnigibson as og
from omnigibson.utils.asset_utils import folder_is_hidden
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available interactive scene and loads a turtlebot into it.
    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Grab all configs and choose one to load
    og_config_path = og.example_config_path
    available_configs = sorted(
        [
            f
            for f in os.listdir(og_config_path)
            if (not folder_is_hidden(f) and os.path.isfile(os.path.join(og_config_path, f)))
        ]
    )
    config_id = choose_from_options(options=available_configs, name="config file", random_selection=random_selection)
    logging.info("Using config file " + config_id)
    config_filename = os.path.join(og.example_config_path, config_id)
    config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)

    load_options = {
        "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
        "Full": "Load all interactive objects in the scene",
    }
    load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
    if load_mode == "Quick":
        config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=config)

    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            for robot_name in action.keys():
                action[robot_name] = action[robot_name] * 0.05
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

🧑‍🏫 Learning

These examples showcase how OmniGibson can be used to train embodied AI agents.

Reinforcement Learning Demo

This demo is useful for...

  • Understanding how to hook up OmniGibson to an external algorithm
  • Understanding how to train and evaluate a policy
python -m omnigibson.examples.learning.stable_baselines3_ppo_example

This demo loads a BEHAVIOR task with a Fetch robot, and trains / evaluates the agent using Stable Baselines3's PPO algorithm.

stable_baselines3_ppo_example.py
"""
Example training code using stable-baselines3 PPO for one BEHAVIOR activity.
Note that due to the sparsity of the reward, this training code will not converge and achieve task success.
This only serves as a starting point that users can further build upon.
"""

import argparse
import logging
import os, time, cv2

import omnigibson as og
from omnigibson import example_config_path

log = logging.getLogger(__name__)

try:
    import gym
    import torch as th
    import torch.nn as nn
    from stable_baselines3 import PPO
    from stable_baselines3.common.evaluation import evaluate_policy
    from stable_baselines3.common.preprocessing import maybe_transpose
    from stable_baselines3.common.torch_layers import BaseFeaturesExtractor
    from stable_baselines3.common.utils import set_random_seed
    from stable_baselines3.common.callbacks import CallbackList, CheckpointCallback, EvalCallback

except ModuleNotFoundError:
    log.error("stable-baselines3 is not installed. You would need to do: pip install stable-baselines3")
    exit(1)


class CustomCombinedExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space: gym.spaces.Dict):
        # We do not know features-dim here before going over all the items,
        # so put something dummy for now; PyTorch requires calling the parent
        # constructor before we can assign any submodules
        super(CustomCombinedExtractor, self).__init__(observation_space, features_dim=1)
        self.debug_length = 10
        self.debug_mode = True
        extractors = {}
        self.step_index = 0
        self.img_save_dir = 'img_save_dir'
        os.makedirs(self.img_save_dir, exist_ok=True)
        total_concat_size = 0
        feature_size = 128
        for key, subspace in observation_space.spaces.items():
            if key in ["rgb", "ins_seg"]:
                print(subspace.shape)
                n_input_channels = subspace.shape[2]  # channel last
                cnn = nn.Sequential(
                    nn.Conv2d(n_input_channels, 4, kernel_size=8, stride=4, padding=0),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(4, 8, kernel_size=4, stride=2, padding=0),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(8, 4, kernel_size=3, stride=1, padding=0),
                    nn.ReLU(),
                    nn.Flatten(),
                )
                test_tensor = th.zeros([subspace.shape[2], subspace.shape[0], subspace.shape[1]])
                with th.no_grad():
                    n_flatten = cnn(test_tensor[None]).shape[1]
                fc = nn.Sequential(nn.Linear(n_flatten, feature_size), nn.ReLU())
                extractors[key] = nn.Sequential(cnn, fc)
            elif key in ["accum_reward", 'obj_joint']:
                extractors[key] = nn.Sequential(nn.Linear(subspace.shape[0], feature_size))
            else:
                continue
            total_concat_size += feature_size
        self.extractors = nn.ModuleDict(extractors)

        # Update the features dim manually
        self._features_dim = total_concat_size

    def forward(self, observations) -> th.Tensor:
        encoded_tensor_list = []
        self.step_index += 1

        # self.extractors contain nn.Modules that do all the processing.
        for key, extractor in self.extractors.items():
            if key in ["rgb",]:
                if self.debug_mode:
                    cv2.imwrite(os.path.join(self.img_save_dir, '{0:06d}.png'.format(self.step_index % self.debug_length)), cv2.cvtColor((observations[key][0].cpu().numpy()*255).astype('uint8'), cv2.COLOR_RGB2BGR))
                observations[key] = observations[key].permute((0, 3, 1, 2))
            elif key in ["ins_seg"]:
                observations[key] = observations[key].permute((0, 3, 1, 2)) / 500.
            elif key in ['accum_reward', 'obj_joint']:
                if len(observations[key].shape) == 3:
                    observations[key] = observations[key].squeeze(-1)  # [:, :, 0]
            else:
                continue

            encoded_tensor_list.append(extractor(observations[key]))

        feature = th.cat(encoded_tensor_list, dim=1)
        return feature


def main():
    # Parse args
    parser = argparse.ArgumentParser(description="Train or evaluate a PPO agent in BEHAVIOR")
    parser.add_argument(
        "--config",
        type=str,
        default=f"{example_config_path}/fetch_behavior.yaml",
        help="Absolute path to desired OmniGibson environment config to load",
    )

    parser.add_argument(
        "--checkpoint",
        type=str,
        default=None,
        help="Absolute path to desired PPO checkpoint to load for evaluation",
    )

    parser.add_argument(
        "--eval",
        action="store_true",
        help="If set, will evaluate the PPO agent found from --checkpoint",
    )

    args = parser.parse_args()
    tensorboard_log_dir = os.path.join("log_dir", time.strftime("%Y%m%d-%H%M%S"))
    os.makedirs(tensorboard_log_dir, exist_ok=True)
    prefix = ''
    seed = 0

    env = og.Environment(configs=args.config, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # If we're evaluating, hide the ceilings and enable camera teleoperation so the user can easily
    # visualize the rollouts dynamically
    if args.eval:
        ceiling = env.scene.object_registry("name", "ceilings")
        ceiling.visible = False
        og.sim.enable_viewer_camera_teleoperation()

    # Set the seed
    set_random_seed(seed)
    env.reset()

    policy_kwargs = dict(
        features_extractor_class=CustomCombinedExtractor,
    )

    os.makedirs(tensorboard_log_dir, exist_ok=True)

    if args.eval:
        assert args.checkpoint is not None, "If evaluating a PPO policy, @checkpoint argument must be specified!"
        model = PPO.load(args.checkpoint)
        print("Starting evaluation...")
        mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=50)
        print("Finished evaluation!")
        log.info(f"Mean reward: {mean_reward} +/- {std_reward:.2f}")

    else:
        model = PPO(
            "MultiInputPolicy",
            env,
            verbose=1,
            tensorboard_log=tensorboard_log_dir,
            policy_kwargs=policy_kwargs,
            n_steps=20 * 10,
            batch_size=8,
            device='cuda',
        )
        checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=tensorboard_log_dir, name_prefix=prefix)
        log.debug(model.policy)
        print(model)

        print("Starting training...")
        model.learn(total_timesteps=10000000, callback=checkpoint_callback,
                    eval_env=env, eval_freq=1000,
                    n_eval_episodes=20)
        print("Finished training!")


if __name__ == "__main__":
    main()
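
To train from scratch and then evaluate a saved policy, the script can be invoked twice (the checkpoint path below is a placeholder; use whichever checkpoint the CheckpointCallback saved under your log directory):

# Train; checkpoints are periodically written to log_dir/<timestamp>/
python -m omnigibson.examples.learning.stable_baselines3_ppo_example

# Evaluate a saved checkpoint
python -m omnigibson.examples.learning.stable_baselines3_ppo_example --eval --checkpoint <path/to/checkpoint.zip>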

🏔️ Scenes

These examples showcase how to leverage OmniGibson's large-scale, diverse scenes shipped with the BEHAVIOR dataset.

Scene Selector Demo

This demo is useful for...

  • Understanding how to load a scene into OmniGibson
  • Accessing all BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.scene_selector

This demo lets you choose a scene from the BEHAVIOR dataset, loads it along with a Turtlebot robot, and cycles the resulting environment periodically.

scene_selector.py
import logging
import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options

# Don't use GPU dynamics, and use flatcache for a performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available non-interactive scene and loads a turtlebot into it.
    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose the scene type to load
    scene_options = {
        "InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
        "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # Choose the scene model to load
    scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"scene model: {scene_model}")

    cfg = {
        "scene": {
            "type": scene_type,
            "scene_model": scene_model,
        },
        "robots": [
            {
                "type": "Turtlebot",
                "obs_modalities": ["scan", "rgb", "depth"],
                "action_type": "continuous",
                "action_normalize": True,
            }
        ],
    }

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            cfg["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    if not headless:
        og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

Traversability Map Demo

This demo is useful for...

  • Understanding how to leverage traversability map information from BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.traversability_map_example

This demo lets you choose a scene from the BEHAVIOR dataset, and generates its corresponding traversability map.

traversability_map_example.py
import logging
import os

import cv2
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

from omnigibson.utils.asset_utils import get_og_scene_path, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Traversable map demo
    Loads the floor plan and obstacles for the requested scene, and overlays them in a visual figure such that the
    highlighted area reflects the traversable (free-space) area
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    scenes = get_available_og_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"Generating traversability map for scene {scene_model}")

    trav_map_size = 200
    trav_map_erosion = 2

    trav_map = Image.open(os.path.join(get_og_scene_path(scene_model), "layout", "floor_trav_0.png"))
    trav_map = np.array(trav_map.resize((trav_map_size, trav_map_size)))
    trav_map = cv2.erode(trav_map, np.ones((trav_map_erosion, trav_map_erosion)))

    if not headless:
        plt.figure(figsize=(12, 12))
        plt.imshow(trav_map)
        plt.title(f"Traversable area of {scene_model} scene")
        plt.show()


if __name__ == "__main__":
    main()
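
Once you have the eroded map, downstream code typically needs to sample valid positions from it. A minimal sketch, assuming the convention above that nonzero pixels are traversable:

import numpy as np

def sample_traversable_pixel(trav_map):
    """Sample a random (row, col) pixel from the traversable (nonzero) area."""
    free_pixels = np.argwhere(trav_map > 0)
    return free_pixels[np.random.randint(len(free_pixels))]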

🎾 Objects

These examples showcase how to leverage objects in OmniGibson.

Load Object Demo

This demo is useful for...

  • Understanding how to load an object into OmniGibson
  • Accessing all BEHAVIOR dataset asset categories and models
python -m omnigibson.examples.objects.load_object_selector

This demo lets you choose a specific object from the BEHAVIOR dataset, and loads the requested object into an environment.

load_object_selector.py
import logging
import numpy as np
from collections import OrderedDict
import omnigibson as og
from omnigibson.utils.asset_utils import (
    get_all_object_categories,
    get_og_avg_category_specs,
    get_object_models_of_category,
)
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    This demo shows how to load any scaled objects from the iG object model dataset
    The user selects an object model to load
    The objects can be loaded into an empty scene, an interactive scene (iG) or a static scene (Gibson)
    The example also shows how to use the Environment API or directly the Simulator API, loading objects and robots
    and executing actions
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)
    scene_options = ["Scene", "InteractiveTraversableScene", "StaticTraversableScene"]
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # -- Choose the object to load --

    # Select a category to load
    available_obj_categories = get_all_object_categories()
    obj_category = choose_from_options(options=available_obj_categories, name="object category", random_selection=random_selection)

    # Select a model to load
    available_obj_models = get_object_models_of_category(obj_category)
    obj_model = choose_from_options(options=available_obj_models, name="object model", random_selection=random_selection)

    # Load the specs of the object categories, e.g., common scaling factor
    avg_category_spec = get_og_avg_category_specs()

    # Create and load this object into the simulator
    obj_cfg = OrderedDict(
        type="DatasetObject",
        name="obj",
        category=obj_category,
        model=obj_model,
        bounding_box=avg_category_spec.get(obj_category),
        fit_avg_dim_volume=True,
        position=[0.5, -0.5, 1.01],
    )

    cfg = {
        "scene": {
            "type": scene_type,
        },
        "objects": [obj_cfg],
    }
    if scene_type == "InteractiveTraversableScene":
        cfg["scene"]["scene_model"] = "Rs_int"
    elif scene_type == "StaticTraversableScene":
        cfg["scene"]["scene_model"] = "Adrian"

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Step through the environment
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        env.step(np.array([]))

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

Object Visualizer Demo

This demo is useful for...

  • Viewing objects' textures as rendered in OmniGibson
  • Viewing articulated objects' range of motion
  • Understanding how to reference object instances from the environment
  • Understanding how to set object poses and joint states
python -m omnigibson.examples.objects.visualize_object

This demo lets you choose a specific object from the BEHAVIOR dataset, and rotates the object in-place. If the object is articulated, it additionally moves its joints through its full range of motion.

visualize_object.py
import argparse
import logging
from collections import OrderedDict
import numpy as np

import omnigibson as og
from omnigibson.utils.asset_utils import (
    get_all_object_categories,
    get_object_models_of_category,
)
from omnigibson.utils.ui_utils import choose_from_options
import omnigibson.utils.transform_utils as T


def main(random_selection=False, headless=False, short_exec=False):
    """
    Visualizes an object specified by its USD path (via the --usd_path argument). If none is specified,
    the user is instead prompted to select an object from OmniGibson's object dataset
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
    # do not want to parse args (it would fail because the calling function is pytest "testfile.py")
    usd_path = None
    if not (random_selection and headless and short_exec):
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--usd_path",
            default=None,
            help="USD Model to load",
        )
        args = parser.parse_args()
        usd_path = args.usd_path

    # Define objects to load
    light0_cfg = OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light0",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 2.0],
    )

    light1_cfg = OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light1",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, 2.0, 2.0],
    )

    # Make sure we have a valid usd path
    if usd_path is None:
        # Select a category to load
        available_obj_categories = get_all_object_categories()
        obj_category = choose_from_options(options=available_obj_categories, name="object category",
                                           random_selection=random_selection)

        # Select a model to load
        available_obj_models = get_object_models_of_category(obj_category)
        obj_model = choose_from_options(options=available_obj_models, name="object model",
                                        random_selection=random_selection)

        kwargs = {
            "type": "DatasetObject",
            "category": obj_category,
            "model": obj_model,
        }
    else:
        kwargs = {
            "type": "USDObject",
            "usd_path": usd_path,
        }

    # Import the desired object (note: usd_path, if specified, is already included in kwargs,
    # so we must not pass it again here)
    obj_cfg = OrderedDict(
        **kwargs,
        name="obj",
        visual_only=True,
        position=[0, 0, 10.0],
    )

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [light0_cfg, light1_cfg, obj_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.00913503, -1.95750906,  1.36407314]),
        orientation=np.array([0.6350064 , 0.        , 0.        , 0.77250687]),
    )

    # Grab the object references
    obj = env.scene.object_registry("name", "obj")

    # Standardize the scale of the object so it fits in a [1,1,1] box -- note that we have to stop the simulator
    # in order to set the scale
    extents = obj.aabb_extent
    og.sim.stop()
    obj.scale = (np.ones(3) / extents).min()
    og.sim.play()
    env.step(np.array([]))

    # Move the object so that its center is at [0, 0, 1]
    center_offset = obj.aabb_center - obj.get_position() + np.array([0, 0, 1.0])
    obj.set_position(center_offset)

    # Allow the user to easily move the camera around
    og.sim.enable_viewer_camera_teleoperation()

    # Rotate the object in place
    steps_per_rotate = 360
    steps_per_joint = steps_per_rotate / 10
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        z_angle = (2 * np.pi * (i % steps_per_rotate) / steps_per_rotate)
        quat = T.euler2quat(np.array([0, 0, z_angle]))
        pos = T.quat2mat(quat) @ center_offset
        if obj.n_dof > 0:
            frac = (i % steps_per_joint) / steps_per_joint
            j_frac = -1.0 + 2.0 * frac if (i // steps_per_joint) % 2 == 0 else 1.0 - 2.0 * frac
            obj.set_joint_positions(positions=j_frac * np.ones(obj.n_dof), normalized=True, target=False)
            obj.keep_still()
        obj.set_position_orientation(position=pos, orientation=quat)
        env.step(np.array([]))


if __name__ == "__main__":
    main()

Highlight Objects Demo

This demo is useful for...

  • Understanding how to highlight individual objects within a cluttered scene
  • Understanding how to access groups of objects from the environment
python -m omnigibson.examples.objects.highlight_objects

This demo loads the Rs_int scene and toggles highlighting on and off for all window objects within it.

highlight_objects.py
"""
Example of toggling highlighting on all objects of a given category
"""
import logging
import numpy as np
import omnigibson as og


def main(random_selection=False, headless=False, short_exec=False):
    """
    Visually highlights all object instances of a given category (windows) and then removes the highlighting
    It also demonstrates how to apply an operation to all instances of objects of a given category
    ONLY WORKS WITH OPTIMIZED RENDERING (not on Mac)
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
        }
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Grab all window objects
    windows = og.sim.scene.object_registry("category", "window")

    # Step environment while toggling window highlighting
    i = 0
    highlighted = False
    max_steps = -1 if not short_exec else 1000
    while i != max_steps:
        env.step(np.array([]))
        if i % 50 == 0:
            highlighted = not highlighted
            logging.info(f"Toggling window highlight to: {highlighted}")
            for window in windows:
                # Note that this property is R/W!
                window.highlighted = highlighted
        i += 1

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()

Draw Object Bounding Box Demo

This demo is useful for...

  • Understanding how to access observations from a GymObservable object (1)
  • Understanding how to access objects' bounding box information
  • Understanding how to dynamically modify vision modalities
  1. Environment, all sensors extending from BaseSensor, and all objects extending from BaseObject (which includes all robots extending from BaseRobot!) are GymObservable objects!
python -m omnigibson.examples.objects.draw_bounding_box

This demo loads a door object and banana object, and partially obscures the banana with the door. It generates both "loose" and "tight" bounding boxes (where the latter respects occlusions) for both objects, and dumps them to an image on disk.

draw_bounding_box.py
import logging
import matplotlib.pyplot as plt
from collections import OrderedDict

import numpy as np
import omnigibson as og
from omni.isaac.synthetic_utils.visualization import colorize_bboxes


def main(random_selection=False, headless=False, short_exec=False):
    """
    Shows how to obtain the bounding box of an articulated object.
    Draws the bounding box around the loaded object, a cabinet, and writes the visualized image to disk at the
    current directory named 'bbox_2d_[loose / tight]_img.png'.

    NOTE: In the GUI, bounding boxes can be natively viewed by clicking on the sensor ((*)) icon at the top,
    and then selecting the appropriate bounding box modalities, and clicking "Show". See:

    https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_replicator/visualization.html#the-visualizer
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Specify objects to load
    banana_cfg = OrderedDict(
        type="DatasetObject",
        name="banana",
        category="banana",
        model="09_0",
        scale=[3.0, 5.0, 2.0],
        position=[-0.906661, -0.545106,  0.136824],
        orientation=[0, 0, 0.76040583, -0.6494482 ],
    )

    door_cfg = OrderedDict(
        type="DatasetObject",
        name="door",
        category="door",
        model="8930",
        position=[-2.0, 0, 0.70000001],
        orientation=[0, 0, -0.38268343,  0.92387953],
    )

    # Create the scene config to load -- empty scene with a few objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [banana_cfg, door_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Set camera to appropriate viewing pose
    cam = og.sim.viewer_camera
    cam.set_position_orientation(
        position=np.array([-4.62785 , -0.418575,  0.933943]),
        orientation=np.array([ 0.52196595, -0.4231939 , -0.46640436,  0.5752612 ]),
    )

    # Add bounding boxes to camera sensor
    bbox_modalities = ["bbox_3d", "bbox_2d_loose", "bbox_2d_tight"]
    for bbox_modality in bbox_modalities:
        cam.add_modality(bbox_modality)

    # Take a few steps to let objects settle
    for i in range(100):
        env.step(np.array([]))

    # Grab observations from viewer camera and write them to disk
    obs = cam.get_obs()

    for bbox_modality in bbox_modalities:
        # Print out each of the modalities
        print(f"Observation modality {bbox_modality}:")
        print(obs[bbox_modality])

        # Also write the 2d loose bounding box to disk
        if "3d" not in bbox_modality:
            colorized_img = colorize_bboxes(bboxes_2d_data=obs[bbox_modality], bboxes_2d_rgb=obs["rgb"], num_channels=4)
            plt.imsave(f"{bbox_modality}_img.png", colorized_img)

    # Always close environment down at end
    env.close()


if __name__ == "__main__":
    main()

🌡️ Object States

These examples showcase OmniGibson's powerful object states functionality, which captures both individual and relational kinematic and non-kinematic states.
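
All of these states share a common dictionary-style API on each object: check whether a state is present in obj.states, then call get_value / set_value on it. A minimal sketch using states that appear in the demos below (here, obj is assumed to have been retrieved via env.scene.object_registry):

from omnigibson import object_states
from omnigibson.systems import DustSystem

# Unary state: toggle the object on, if it supports toggling
if object_states.ToggledOn in obj.states:
    obj.states[object_states.ToggledOn].set_value(True)

# System-relational state: cover the object with dust particles
if object_states.Covered in obj.states:
    obj.states[object_states.Covered].set_value(DustSystem, True)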

Attachment Demo

This demo is useful for...

  • Understanding how to leverage the Attached state
  • Understanding how to enable objects to be attachable
python -m omnigibson.examples.object_states.attachment_demo

This demo loads an apple and a fridge, and showcases how they may or may not be attached upon contact based on their assigned attachment types.

attachment_demo.py
import numpy as np
import yaml
from collections import OrderedDict

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options
from omnigibson.object_states.attachment import AttachmentType
from omnigibson.object_states import Attached

# Make sure object states are enabled and global contact reporting is enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True


def setup_scene_for_abilities(abilities1, abilities2):
    # Make sure simulation is stopped
    og.sim.stop()

    cfg = yaml.load(open(f"{og.example_config_path}/default_cfg.yaml", "r"), Loader=yaml.FullLoader)

    # Add objects that we want to create
    light_cfg = OrderedDict(
        type="LightObject",
        name="light",
        light_type="Sphere",
        radius=0.01,
        intensity=5000,
        position=[0, 0, 1.0],
    )

    apple_cfg = OrderedDict(
        type="DatasetObject",
        name="apple",
        category="apple",
        model="00_0",
        abilities=abilities1,
        position=[0, 0, 0.04],
    )

    fridge_cfg = OrderedDict(
        type="DatasetObject",
        name="fridge",
        category="fridge",
        model="12252",
        abilities=abilities2,
        position=[2, 0, 0.8],
    )

    cfg["objects"] = [light_cfg, apple_cfg, fridge_cfg]

    # Recreate the environment (this will automatically override the old environment instance)
    # We load the default config, which is simply a Scene with no objects loaded in by default
    env = og.Environment(configs=cfg)

    # Grab apple and fridge
    apple = env.scene.object_registry("name", "apple")
    fridge = env.scene.object_registry("name", "fridge")

    # Set viewer camera pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.972333, -2.0899  ,  1.0654  ]),
        orientation=np.array([ 0.60682517, -0.24656188, -0.28443909,  0.70004632]),
    )

    # Take a few steps
    for _ in range(5):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    return env, apple, fridge


def demo_sticky_attachment():
    ######################################################################################
    # Sticky attachment
    #   can attach if touching and at least one object has the sticky attachment type.
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.STICKY}},
        abilities2={})
    assert Attached in obj1.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        og.sim.step()

    assert obj1.states[Attached].get_value(obj2)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_symmetric_attachment():
    ######################################################################################
    # Symmetric attachment
    #   can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_symmetric_attachment_missing_symmetric():
    ######################################################################################
    # Symmetric attachment - FAIL because only 1 object has the symmetric attachment type
    #   can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={})
    assert Attached in obj1.states
    assert Attached not in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_symmetric_attachment_diff_categories():
    ######################################################################################
    # Symmetric attachment - FAIL because the two objects have different attachment category
    #   can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "velcro"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_male_female_attachment():
    ######################################################################################
    # Male / female attachment
    #   can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "usb"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_male_female_attachment_missing_opposite_end():
    ######################################################################################
    # Male / female attachment - FAIL because both objects are male.
    #   can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2, but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_male_female_attachment_diff_categories():
    ######################################################################################
    # Male / female attachment - FAIL because the two objects have different attachment category
    #   can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "hdmi"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2, but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_male_female_attachment_dump_load():
    ######################################################################################
    # Male / female attachment with dump_state and load_state
    #   can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "usb"}})
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Save the state.
    state = og.sim.dump_state()

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Load the state where the two objects are attached.
    og.sim.load_state(state)

    # Attached state should be restored correctly
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

demo_names_to_demos = {
    "demo_sticky_attachment": demo_sticky_attachment,
    "demo_symmetric_attachment": demo_symmetric_attachment,
    "demo_failed_symmetric_attachment_missing_symmetric": demo_failed_symmetric_attachment_missing_symmetric,
    "demo_failed_symmetric_attachment_diff_categories": demo_failed_symmetric_attachment_diff_categories,
    "demo_male_female_attachment": demo_male_female_attachment,
    "demo_failed_male_female_attachment_missing_opposite_end": demo_failed_male_female_attachment_missing_opposite_end,
    "demo_failed_male_female_attachment_diff_categories": demo_failed_male_female_attachment_diff_categories,
    "demo_male_female_attachment_dump_load": demo_male_female_attachment_dump_load,
}


def main(random_selection=False, headless=False, short_exec=False):
    # Loop over the available demos, prompting the user to choose one to run each iteration
    for i in range(len(demo_names_to_demos)):
        demo_name = choose_from_options(options=list(demo_names_to_demos.keys()), name="attachment demo")
        # Run the demo
        demo_names_to_demos[demo_name]()


if __name__ == "__main__":
    main()

Cleaning Demo

This demo is useful for...

  • Understanding how WaterSource-enabled objects can be toggled on to generate water particles
  • Understanding how to programmatically generate Stain and Dust particles on objects
  • Understanding how particles can be removed via a ParticleRemover object
python -m omnigibson.examples.object_states.cleaning_demo

This demo first loads a basic cube object with the ability to remove particles into a fully-populated scene, then spawns dust and stain particles on various objects and turns on the sink. You can then move (1) the cube around to absorb water particles so that it becomes "soaked" with water, and then drag it across objects with dust or stain particles to remove them.

  1. Manipulate the object by holding down Shift and then Left-click + Drag!
cleaning_demo.py
import logging
import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.systems import DustSystem, StainSystem, WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of a cleaning task
    Loads an interactive scene and sets all object surface to be dirty
    Loads also a cleaning tool that can be soaked in water and used to clean objects if moved manually
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- Rs_int with only a few object categories loaded, as
    # well as a custom block object that will be used as a cleaning tool
    def check_water_saturation(obj):
        return obj.states[object_states.Saturated].get_value(WaterSystem)

    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
            "load_object_categories": ["floors", "walls", "ceilings", "breakfast_table", "bottom_cabinet", "sink", "stove", "fridge", "window"],
        },
        "objects": [
            # A cleaning tool (cuboid) with the ability to be saturated and remove stain, dust, and water particles
            {
                "type": "PrimitiveObject",
                "name": "block",
                "primitive_type": "Cube",
                "scale": [0.15, 0.1, 0.03],
                "rgba": [0.5, 1.0, 1.0, 1.0],
                "abilities": {
                    "saturable": {},
                    "particleRemover": {
                        "method": ParticleModifyMethod.ADJACENCY,
                        "conditions": {
                            # For a specific particle system, this specifies what conditions are required in order for the
                            # particle applier / remover to apply / remove particles associated with that system
                            # The list should contain functions with signature condition(obj) --> bool,
                            # where True means the condition is satisfied
                            # In this case, we only allow our cleaning tool to remove stains and dust particles if
                            # the object is saturated with water, i.e.: it's "soaked" with water particles
                            StainSystem: [check_water_saturation],
                            DustSystem: [check_water_saturation],
                            WaterSystem: [],
                        },
                    },
                },
                "position": [-1.4, 3.0, 1.5],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Set everything that can go dirty and activate the water sources
    dusty_objects = env.scene.object_registry("category", "breakfast_table")
    stained_objects = env.scene.object_registry("category", "bottom_cabinet")
    water_source_objects = env.scene.get_objects_with_state(object_states.WaterSource)

    for obj in dusty_objects:
        logging.info(f"Setting object {obj.name} to be Dusty")
        obj.states[object_states.Covered].set_value(DustSystem, True)

    for obj in stained_objects:
        logging.info(f"Setting object {obj.name} to be Stained")
        obj.states[object_states.Covered].set_value(StainSystem, True)

    for obj in water_source_objects:
        if object_states.ToggledOn in obj.states:
            logging.info(f"Setting water source object {obj} to be ToggledOn")
            obj.states[object_states.ToggledOn].set_value(True)

    # Set the camera to be in a good position
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.825556,  2.42499 ,  1.04104 ]),
        orientation=np.array([0.56919735, 0.09896035, 0.13981109, 0.80416049]),
    )

    max_steps = -1 if not short_exec else 1000
    step = 0
    try:
        for i in range(200):
            env.step(np.array([]))
        while step != max_steps:
            env.step(np.array([]))      # Empty action since no robots in the environment
            step += 1
    finally:
        # Always close environment at the end
        env.close()


if __name__ == "__main__":
    main()
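
The conditions mapping is the key extension point in the config above: for each particle system, every predicate in the list must return True before the remover will act on that system's particles. As a minimal sketch following the demo's predicate signature (the modifier object is passed in), a tool could additionally be required to be toggled on before it cleans. Note that the is_on predicate and the "toggleable" ability it relies on are assumptions for illustration, not part of this demo:

from omnigibson import object_states
from omnigibson.systems import DustSystem, StainSystem, WaterSystem

def is_soaked(obj):
    # True once the tool has absorbed enough water particles
    return obj.states[object_states.Saturated].get_value(WaterSystem)

def is_on(obj):
    # Hypothetical extra gate -- requires the "toggleable" ability on the tool
    return obj.states[object_states.ToggledOn].get_value()

conditions = {
    StainSystem: [is_soaked, is_on],    # all predicates must hold to remove stains
    DustSystem: [is_soaked, is_on],     # ...and likewise for dust
    WaterSystem: [],                    # no preconditions for absorbing water
}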

Simple Cleaning Demo

This demo is useful for...

  • Understanding how WaterSource-enabled objects can be toggled on to generate water particles
  • Understanding how to programmatically generate Stain and Dust particles on objects
  • Understanding how particles can be removed via a ParticleRemover object
python -m omnigibson.examples.object_states.cleaning_demo_simple

This demo is similar in functionality to the full cleaning demo, but instead of a fully populated scene, only a select few objects are loaded. In this case, the loaded scrub_brush object is the ParticleRemover. As before, you can move (1) the brush around to absorb water particles until it is "soaked", and then drag it across objects covered in dust or stain particles to remove them.

  1. Manipulate the object by holding down Shift and then Left-click + Drag!
cleaning_demo_simple.py
import logging
from collections import OrderedDict
import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.systems import DustSystem, StainSystem, WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of a cleaning task that resets after everything has been cleaned
    Loads an empty scene with a sink, a dusty table, a stained bowl, and a cleaning tool
    If everything is cleaned, or after N steps, the scene resets to the initial state
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        }
    }

    # Define objects to load into the environment
    sink_cfg = OrderedDict(
        type="DatasetObject",
        name="sink",
        category="sink",
        model="sink_1",
        scale=[0.8, 0.8, 0.8],
        abilities={"toggleable": {}, "waterSource": {}, "waterSink": {}},
        position=[-0.7, 0, 0.53],
    )

    def check_water_saturation(obj):
        return obj.states[object_states.Saturated].get_value(WaterSystem)

    brush_cfg = OrderedDict(
        type="DatasetObject",
        name="brush",
        category="scrub_brush",
        model="scrub_brush_000",
        avg_obj_dims={"size": [0.1, 0.1, 0.1], "density": 67.0},
        fit_avg_dim_volume=True,
        position=[1.0, 0, 0.4],
        abilities={
            "saturable": {},
            "particleRemover": {
                "method": ParticleModifyMethod.ADJACENCY,
                "conditions": {
                    # For a specific particle system, this specifies what conditions are required in order for the
                    # particle applier / remover to apply / remove particles associated with that system
                    # The list should contain functions with signature condition() --> bool,
                    # where True means the condition is satisfied
                    # In this case, we only allow our cleaning tool to remove stains and dust particles if
                    # the object is saturated with water, i.e.: it's "soaked" with water particles
                    StainSystem: [check_water_saturation],
                    DustSystem: [check_water_saturation],
                    WaterSystem: [],
                },
            },
        },
    )

    # Desk that's dusty
    desk_cfg = OrderedDict(
        type="DatasetObject",
        name="desk",
        category="breakfast_table",
        model="19203",
        scale=[0.8, 0.8, 0.8],
        position=[1.0, 0, 0.48],
    )

    # Bowl with stains
    bowl_cfg = OrderedDict(
        type="DatasetObject",
        name="bowl",
        category="bowl",
        model="68_0",
        scale=np.array([0.8, 0.8, 0.8]),
        position=[-1.0, 0, 0.48],
    )

    cfg["objects"] = [sink_cfg, brush_cfg, desk_cfg, bowl_cfg]

    # Create the environment!
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Set camera to ideal angle for viewing objects
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.782289, -0.633009,  1.4475  ]),
        orientation=np.array([ 0.48871723, -0.24618907, -0.37654978,  0.74750028]),
    )

    # Take a few steps to let the objects settle, and then sanity check the initial state
    for _ in range(10):
        env.step(np.array([]))              # Empty action since no robots are in the scene

    sink = env.scene.object_registry("name", "sink")
    brush = env.scene.object_registry("name", "brush")
    desk = env.scene.object_registry("name", "desk")
    bowl = env.scene.object_registry("name", "bowl")

    assert sink.states[object_states.ToggledOn].set_value(True)
    assert desk.states[object_states.Covered].set_value(DustSystem, True)
    assert bowl.states[object_states.OnTop].set_value(desk, True, use_ray_casting_method=True)
    assert brush.states[object_states.OnTop].set_value(desk, True, use_ray_casting_method=True)
    assert bowl.states[object_states.Covered].set_value(StainSystem, True)

    # Take a step, and save the state
    env.step(np.array([]))
    initial_state = og.sim.dump_state()

    # Main simulation loop.
    max_steps = 1000
    max_iterations = -1 if not short_exec else 1
    iteration = 0
    try:
        while iteration != max_iterations:
            # Keep stepping until table or bowl are clean, or we reach 1000 steps
            steps = 0
            while (
                desk.states[object_states.Covered].get_value(DustSystem)
                and bowl.states[object_states.Covered].get_value(StainSystem)
                and steps != max_steps
            ):
                steps += 1
                env.step(np.array([]))
                logging.info(f"Step {steps}")

            if not desk.states[object_states.Covered].get_value(DustSystem):
                logging.info("Reset because Table cleaned")
            elif not bowl.states[object_states.Covered].get_value(StainSystem):
                logging.info("Reset because Bowl cleaned")
            else:
                logging.info("Reset because max steps")

            # Reset to the initial state
            og.sim.load_state(initial_state)

            iteration += 1

    finally:
        # Always shut down environment at the end
        env.close()


if __name__ == "__main__":
    main()
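
The reset logic above relies on simulator-level state serialization, which is useful well beyond this demo. Stripped to its essentials, the save/restore pattern looks like the sketch below (it assumes an env has already been created, as in the demo):

import numpy as np
import omnigibson as og

# ... after the scene has settled into the configuration you want to keep ...
env.step(np.array([]))                  # let the last state changes propagate
initial_state = og.sim.dump_state()     # snapshot the full simulator state

# ... step the simulation, move objects, remove particles, etc. ...

og.sim.load_state(initial_state)        # everything snaps back to the snapshot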

Folded Demo

This demo is useful for...

  • Understanding how to load a softbody (cloth) version of a BEHAVIOR dataset object
  • Understanding how to enable cloth objects to be foldable
  • Understanding the current heuristics used for gauging a cloth's "foldedness"

This is a beta feature

Our Folded state is still a WIP, and should not be used to accurately capture the full semantics of folding cloth-enabled objects.

python -m omnigibson.examples.object_states.folded_state_demo

This demo loads in three different cloth objects, and allows you to manipulate them (1) while printing out their Folded state status in real-time.

  1. Manipulate the object by holding down Shift and then Left-click + Drag!
folded_state_demo.py
from omnigibson.utils.constants import PrimType
from omnigibson.object_states import Folded
from omnigibson.macros import gm
import logging

import omnigibson as og

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of cloth objects that can potentially be folded.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene + custom cloth object
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "carpet",
                "category": "carpet",
                "model": "carpet_0",
                "prim_type": PrimType.CLOTH,
                "abilities": {"foldable": {}},
                "position": [0, 0, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "dishtowel",
                "category": "dishtowel",
                "model": "Tag_Dishtowel_Basket_Weave_Red",
                "prim_type": PrimType.CLOTH,
                "scale": 5.0,
                "abilities": {"foldable": {}},
                "position": [1, 1, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "shirt",
                "category": "t-shirt",
                "model": "t-shirt_000",
                "prim_type": PrimType.CLOTH,
                "scale": 0.05,
                "abilities": {"foldable": {}},
                "position": [-1, 1, 0.5],
                "orientation": [0.7071, 0., 0.7071, 0.],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Grab object references
    carpet = env.scene.object_registry("name", "carpet")
    dishtowel = env.scene.object_registry("name", "dishtowel")
    shirt = env.scene.object_registry("name", "shirt")

    max_steps = 100 if short_exec else -1
    steps = 0

    # Criterion #1: the area of the convex hull of the projection of points onto the x-y plane should be reduced
    # Criterion #2: the diagonal of the convex hull of the projection of points onto the x-y plane should be reduced
    # Criterion #3: the face normals of the cloth should mostly point along the z-axis
    while steps != max_steps:
        og.sim.step()

        flag_area_reduction, flag_diagonal_reduction = carpet.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = carpet.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info = 'carpet: [folded] %d [A] %d [D] %d [S] %d' % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        flag_area_reduction, flag_diagonal_reduction = dishtowel.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = dishtowel.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info += " || dishtowel: [folded] %d [A] %d [D] %d [S] %d" % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        flag_area_reduction, flag_diagonal_reduction = shirt.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = shirt.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info += " || tshirt: [folded] %d [A] %d [D] %d [S] %d" % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        print(info)
        steps += 1

    # Shut down env at the end
    env.close()


if __name__ == "__main__":
    main()
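
Since the per-cloth check is identical for all three objects, your own scripts may want to factor the heuristic into a helper. Below is a sketch using only the Folded-state methods the demo already calls, with carpet, dishtowel, and shirt being the references grabbed above:

from omnigibson.object_states import Folded

def folded_flags(cloth):
    # Returns (folded, area_reduced, diagonal_reduced, smooth) for a cloth object
    area, diagonal = cloth.states[Folded].check_projection_area_and_diagonal()
    smooth = cloth.states[Folded].check_smoothness()
    return area and diagonal and smooth, area, diagonal, smooth

for name, cloth in [("carpet", carpet), ("dishtowel", dishtowel), ("tshirt", shirt)]:
    folded, a, d, s = folded_flags(cloth)
    print(f"{name}: [folded] {folded:d} [A] {a:d} [D] {d:d} [S] {s:d}")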

Temperature Demo

This demo is useful for...

  • Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
  • Understanding how temperature changes are propagated to individual objects from individual heat sources or sinks
python -m omnigibson.examples.object_states.temperature_demo

This demo loads in various heat sources and sinks, and places an apple within close proximity to each of them. As the environment steps, each apple's temperature is printed in real-time, showcasing OmniGibson's rudimentary temperature dynamics.

temperature_demo.py
import logging
from collections import OrderedDict
import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of temperature change
    Loads a stove, a microwave and an oven, all toggled on, and five frozen apples
    The user can move the apples to see them change from frozen, to normal temperature, to cooked and burnt
    This demo also shows how to load objects ToggledOn and how to set the initial temperature of an object
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Define specific objects we want to load in with the scene directly
    obj_configs = []

    # Light
    obj_configs.append(OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="light",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 1.0],
    ))

    # Stove
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="stove",
        category="stove",
        model="101943",
        position=[0, 0, 0.65],
    ))

    # Microwave
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="microwave",
        category="microwave",
        model="7128",
        scale=0.25,
        position=[2.5, 0, 0.094],
    ))

    # Oven
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="oven",
        category="oven",
        model="7120",
        position=[-1.25, 0, 0.80],
    ))

    # Tray
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="tray",
        category="tray",
        model="tray_000",
        scale=0.15,
        position=[0, 0, 1.24],
    ))

    # Fridge
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="fridge",
        category="fridge",
        model="12252",
        abilities={
            "coldSource": {
                "temperature": -100.0,
                "requires_inside": True,
            }
        },
        position=[1.25, 0, 0.90],
    ))

    # 5 Apples
    for i in range(5):
        obj_configs.append(OrderedDict(
            type="DatasetObject",
            name=f"apple{i}",
            category="apple",
            model="00_0",
            position=[0, i * 0.05, 1.65],
        ))

    # Create the scene config to load -- empty scene with desired objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Get reference to relevant objects
    stove = env.scene.object_registry("name", "stove")
    microwave = env.scene.object_registry("name", "microwave")
    oven = env.scene.object_registry("name", "oven")
    tray = env.scene.object_registry("name", "tray")
    fridge = env.scene.object_registry("name", "fridge")
    apples = list(env.scene.object_registry("category", "apple"))

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([ 0.46938863, -3.97887141,  1.64106008]),
        orientation=np.array([0.63311689, 0.00127259, 0.00155577, 0.77405359]),
    )

    # Let objects settle
    for _ in range(25):
        env.step(np.array([]))

    # Turn on all scene objects
    stove.states[object_states.ToggledOn].set_value(True)
    microwave.states[object_states.ToggledOn].set_value(True)
    oven.states[object_states.ToggledOn].set_value(True)

    # Set initial temperature of the apples to -50 degrees Celsius, and move the apples to different objects
    for apple in apples:
        apple.states[object_states.Temperature].set_value(-50)
    apples[0].states[object_states.Inside].set_value(oven, True, use_ray_casting_method=True)
    apples[1].set_position(stove.links["heat_source_link"].get_position() + np.array([0, 0, 0.1]))
    apples[2].states[object_states.OnTop].set_value(tray, True, use_ray_casting_method=True)
    apples[3].states[object_states.Inside].set_value(fridge, True, use_ray_casting_method=True)
    apples[4].states[object_states.Inside].set_value(microwave, True, use_ray_casting_method=True)

    steps = 0
    max_steps = -1 if not short_exec else 1000

    # Main recording loop
    locations = [f'{loc:>20}' for loc in ["Inside oven", "On stove", "On tray", "Inside fridge", "Inside microwave"]]
    print()
    print(f"{'Apple location:':<20}", *locations)
    while steps != max_steps:
        env.step(np.array([]))
        temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
        print(f"{'Apple temperature:':<20}", *temps, end="\r")
        steps += 1

    # Always close env at the end
    env.close()


if __name__ == "__main__":
    main()
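
Temperature is just another object state, so the core read-modify loop generalizes beyond this demo. A minimal sketch, assuming an apple object and env exist as above with the heat sources already toggled on:

import numpy as np
from omnigibson import object_states

apple.states[object_states.Temperature].set_value(-50)          # start frozen
for _ in range(100):
    env.step(np.array([]))                                      # heat sources/sinks update temperatures each step
    print(apple.states[object_states.Temperature].get_value())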

Heat Source or Sink Demo

This demo is useful for...

  • Understanding how a heat source (or sink) is visualized in OmniGibson
  • Understanding how dynamic fire visuals are generated in real-time
python -m omnigibson.examples.object_states.heat_source_or_sink_demo

This demo loads in a stove and toggles its HeatSource on and off, showcasing the dynamic fire visuals available in OmniGibson.

heat_source_or_sink_demo.py
import numpy as np
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Create the scene config to load -- empty scene with a stove object added
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "stove",
                "category": "stove",
                "model": "101908",
                "abilities": {
                    "heatSource": {"requires_toggled_on": True},
                    "toggleable": {},
                },
                "position": [0, 0, 0.4],
            }
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Get reference to stove object
    stove = env.scene.object_registry("name", "stove")

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.0792399, -1.30104, 1.51981]),
        orientation=np.array([0.54897692, 0.00110359, 0.00168013, 0.83583509]),
    )

    # Make sure necessary object states are included with the stove
    assert object_states.HeatSourceOrSink in stove.states
    assert object_states.ToggledOn in stove.states

    # Take a few steps so that visibility propagates
    for _ in range(5):
        env.step(np.array([]))

    # Heat source is off.
    print("Heat source is OFF.")
    heat_source_state, heat_source_position = stove.states[object_states.HeatSourceOrSink].get_value()
    assert not heat_source_state

    # Toggle on stove, notify user
    input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)

    assert stove.states[object_states.ToggledOn].get_value()

    # Need to take a step to update the state.
    env.step(np.array([]))

    # Heat source is on
    heat_source_state, heat_source_position = stove.states[object_states.HeatSourceOrSink].get_value()
    assert heat_source_state
    for _ in range(500):
        env.step(np.array([]))

    # Toggle off stove, notify user
    input("Heat source will now turn OFF: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(False)
    assert not stove.states[object_states.ToggledOn].get_value()
    for _ in range(200):
        env.step(np.array([]))

    # Move stove, notify user
    input("Heat source is now moving: Press ENTER to continue.")
    stove.set_position(np.array([0, 1.0, 0.4]))
    for i in range(100):
        env.step(np.array([]))

    # Toggle on stove again, notify user
    input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)
    assert stove.states[object_states.ToggledOn].get_value()
    for i in range(500):
        env.step(np.array([]))

    # Shutdown environment at end
    env.close()


if __name__ == "__main__":
    main()
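
Because the stove's heatSource is configured with requires_toggled_on=True, the HeatSourceOrSink state tracks ToggledOn, and its get_value() returns both whether the source is currently active and where it is. A compact polling sketch, assuming the stove reference from this demo:

from omnigibson import object_states

active, position = stove.states[object_states.HeatSourceOrSink].get_value()
if active:
    print(f"Heat source active at {position}")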

Heated Demo

This demo is useful for...

  • Understanding how temperature modifications can cause objects' visual changes
  • Understanding how dynamic steam visuals are generated in real-time
python -m omnigibson.examples.object_states.heated_state_demo

This demo loads in three bowls, and immediately sets their temperatures past their Heated threshold. Steam is generated in real-time from these objects, and then disappears once the temperature of the objects drops below their Heated threshold.

heated_state_demo.py
import numpy as np
from collections import OrderedDict
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Define object configurations for objects to load -- we want to load a light and three bowls
    obj_configs = []

    obj_configs.append(OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="light",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 1.0],
    ))

    for i, (scale, x) in enumerate(zip([0.5, 1.0, 2.0], [-0.6, 0, 0.8])):
        obj_configs.append(OrderedDict(
            type="DatasetObject",
            name=f"bowl{i}",
            category="bowl",
            model="68_0",
            scale=scale,
            abilities={"heatable": {}},
            position=[x, 0, 0.2],
        ))

    # Create the scene config to load -- empty scene with light object and bowls
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([ 0.182103, -2.07295 ,  0.14017 ]),
        orientation=np.array([0.77787037, 0.00267566, 0.00216149, 0.62841535]),
    )

    # Grab reference to objects of relevance
    objs = list(env.scene.object_registry("category", "bowl"))

    def report_states(objs):
        for obj in objs:
            print("=" * 20)
            print("object:", obj.name)
            print("temperature:", obj.states[object_states.Temperature].get_value())
            print("obj is heated:", obj.states[object_states.Heated].get_value())

    # Report default states
    print("==== Initial state ====")
    report_states(objs)

    # Notify user that we're about to heat the object
    input("Objects will be heated, and steam will slowly rise. Press ENTER to continue.")

    # Heated.
    for obj in objs:
        obj.states[object_states.Temperature].set_value(50)
    env.step(np.array([]))
    report_states(objs)

    # Take a look at the steam effect.
    # After a while, objects will be below the Steam temperature threshold.
    print("==== Objects are now heated... ====")
    print()
    for _ in range(2000):
        env.step(np.array([]))
        # Also print temperatures
        temps = [f"{obj.states[object_states.Temperature].get_value():>7.2f}" for obj in objs]
        print(f"obj temps:", *temps, end="\r")
    print()

    # Objects are not heated anymore.
    print("==== Objects are no longer heated... ====")
    report_states(objs)

    # Close environment at the end
    input("Demo completed. Press ENTER to shutdown environment.")
    env.close()


if __name__ == "__main__":
    main()

Object Texture Demo

This demo is useful for...

  • Understanding how different object states can result in texture changes
  • Understanding how to enable objects with texture-changing states
  • Understanding how to dynamically modify object states
python -m omnigibson.examples.object_states.object_state_texture_demo

This demo loads in a single object, and then dynamically modifies its state so that its texture changes with each modification.

object_state_texture_demo.py
import numpy as np
import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm, macros
from omnigibson.systems import WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Create the scene config to load -- empty scene plus a light and a cabinet
    cfg = {
        "scene": {
            "type": "Scene",
            "floor_plane_visible": True,
        },
        "objects": [
            {
                "type": "LightObject",
                "name": "light",
                "light_type": "Sphere",
                "radius": 0.01,
                "intensity": 1e5,
                "position": [-2.0, -2.0, 1.0],
            },
            {
                "type": "DatasetObject",
                "name": "cabinet",
                "category": "bottom_cabinet",
                "model": "45087",
                "abilities": {
                    "freezable": {},
                    "cookable": {},
                    "burnable": {},
                    "saturable": {},
                    "toggleable": {},
                    "particleRemover": {
                        "method": ParticleModifyMethod.ADJACENCY,
                        "conditions": {
                            # For a specific particle system, this specifies what conditions are required in order for the
                            # particle applier / remover to apply / remove particles associated with that system
                            # The list should contain functions with signature condition() --> bool,
                            # where True means the condition is satisfied
                            # In this case, we only allow our cabinet to absorb water, with no conditions needed.
                            # This is needed for the Saturated ("saturable") state so that we can modify the texture
                            # according to the water.
                            # NOTE: This will only change color if gm.ENABLE_HQ_RENDERING and gm.USE_GPU_DYNAMICS
                            # are enabled!
                            WaterSystem: [],
                        },
                    },
                },
                "position": [0, 0, 0.55],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([ 1.7789 , -1.68822,  1.13551]),
        orientation=np.array([0.57065614, 0.20331904, 0.267029  , 0.74947212]),
    )

    # Grab reference to object of interest
    obj = env.scene.object_registry("name", "cabinet")

    # Make sure all the appropriate states are in the object
    assert object_states.Frozen in obj.states
    assert object_states.Cooked in obj.states
    assert object_states.Burnt in obj.states
    assert object_states.Saturated in obj.states
    assert object_states.ToggledOn in obj.states

    def report_states():
        # Make sure states are propagated before printing
        for i in range(5):
            env.step(np.array([]))

        print("=" * 20)
        print("temperature:", obj.states[object_states.Temperature].get_value())
        print("obj is frozen:", obj.states[object_states.Frozen].get_value())
        print("obj is cooked:", obj.states[object_states.Cooked].get_value())
        print("obj is burnt:", obj.states[object_states.Burnt].get_value())
        print("obj is soaked:", obj.states[object_states.Saturated].get_value(WaterSystem))
        print("obj is toggledon:", obj.states[object_states.ToggledOn].get_value())
        print("obj textures:", obj.get_textures())

    # Report default states
    print("==== Initial state ====")
    report_states()

    # Notify user that we're about to freeze the object, and then freeze the object
    input("\nObject will be frozen. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(-50)
    report_states()

    # Notify user that we're about to cook the object, and then cook the object
    input("\nObject will be cooked. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(100)
    report_states()

    # Notify user that we're about to burn the object, and then burn the object
    input("\nObject will be burned. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(250)
    report_states()

    # Notify user that we're about to reset the object to its default state, and then reset state
    input("\nObject will be reset to default state. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    obj.states[object_states.MaxTemperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    report_states()

    # Notify user that we're about to soak the object, and then soak the object
    input("\nObject will be saturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(WaterSystem, True)
    report_states()

    # Notify user that we're about to unsoak the object, and then unsoak the object
    input("\nObject will be unsaturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(WaterSystem, False)
    report_states()

    # Notify user that we're about to toggle on the object, and then toggle on the object
    input("\nObject will be toggled on. Press ENTER to continue.")
    obj.states[object_states.ToggledOn].set_value(True)
    report_states()

    # Notify user that we're about to toggle off the object, and then toggle off the object
    input("\nObject will be toggled off. Press ENTER to continue.")
    obj.states[object_states.ToggledOn].set_value(False)
    report_states()

    # Close environment at the end
    input("Demo completed. Press ENTER to shutdown environment.")
    env.close()


if __name__ == "__main__":
    main()
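
Note that Cooked and Burnt depend on the object's MaxTemperature rather than its current Temperature, which is why the reset step above writes the default value into both states. In isolation, the reset is just (a sketch reusing the macro path from the demo):

from omnigibson import object_states
from omnigibson.macros import macros

default_temp = macros.object_states.temperature.DEFAULT_TEMPERATURE
obj.states[object_states.Temperature].set_value(default_temp)       # clears Frozen
obj.states[object_states.MaxTemperature].set_value(default_temp)    # clears the Cooked / Burnt history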

Particle Applier and Remover Demo

This demo is useful for...

  • Understanding how a ParticleRemover or ParticleApplier object can be generated
  • Understanding how particles can be dynamically generated on objects
  • Understanding different methods for applying and removing particles via the ParticleRemover or ParticleApplier object
python -m omnigibson.examples.object_states.particle_applier_remover_demo

This demo loads in a dishtowel and a table and lets you choose the ability configuration to give the towel. Depending on your choice, the towel will then either remove particles from or apply particles onto the table while it moves.

particle_applier_remover_demo.py
import logging
from collections import OrderedDict
import numpy as np
import omnigibson as og
from omnigibson.object_states import Covered
from omnigibson.objects import DatasetObject
from omnigibson.macros import gm, macros
from omnigibson.systems import *
from omnigibson.utils.usd_utils import create_joint
from omnigibson.utils.ui_utils import choose_from_options
from omnigibson.utils.constants import ParticleModifyMethod
from omni.isaac.core.utils.stage import add_reference_to_stage
from pxr import Gf

# Set macros for this example
macros.object_states.particle_modifier.VISUAL_PARTICLES_REMOVAL_LIMIT = 1000
macros.object_states.particle_modifier.FLUID_PARTICLES_REMOVAL_LIMIT = 8000
macros.object_states.particle_modifier.MAX_VISUAL_PARTICLES_APPLIED_PER_STEP = 10
macros.object_states.particle_modifier.MAX_FLUID_PARTICLES_APPLIED_PER_STEP = 40
StainSystem._N_PARTICLES_PER_GROUP = 300

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for fluids)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of ParticleApplier and ParticleRemover object states, which enable objects to either apply arbitrary
    particles and remove arbitrary particles from the simulator, respectively.

    Loads an empty scene with a table, and starts clean to allow particles to be applied or pre-covers the table
    with particles to be removed. The ParticleApplier / ParticleRemover state is applied to an imported cloth object
    and allowed to interact with the table, applying / removing particles from the table.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose what configuration to load
    modifier_type = choose_from_options(
        options={
            "particleApplier": "Demo object's ability to apply particles in the simulator",
            "particleRemover": "Demo object's ability to remove particles from the simulator",
        },
        name="particle modifier type",
        random_selection=random_selection,
    )

    modification_metalink = {
        "particleApplier": "particleapplication_link",
        "particleRemover": "particleremover_link",
    }

    particle_mapping = {system.name: system for system in [StainSystem, WaterSystem]}
    particle_type = choose_from_options(
        options={name: f"{name} particles will be applied or removed from the simulator" for name in particle_mapping},
        name="particle type",
        random_selection=random_selection,
    )
    particle_system = particle_mapping[particle_type]

    modification_method = {
        "Adjacency": ParticleModifyMethod.ADJACENCY,
        "Projection": ParticleModifyMethod.PROJECTION,
    }

    projection_mesh_params = {
        "Adjacency": None,
        "Projection": {
            # Either Cone or Cylinder; shape of the projection where particles can be applied / removed
            "type": "Cone",
            # Size of the cone
            "extents": np.array([0.375, 0.375, 0.75]),
        },
    }

    method_type = choose_from_options(
        options={
            "Adjacency": "Close proximity to the object will be used to determine whether particles can be applied / removed",
            "Projection": "A Cone or Cylinder shape protruding from the object will be used to determine whether particles can be applied / removed",
        },
        name="modifier method type",
        random_selection=random_selection,
    )

    # Create the ability kwargs to pass to the object state
    abilities = {
        modifier_type: {
            "method": modification_method[method_type],
            "conditions": {
                # For a specific particle system, this specifies what conditions are required in order for the
                # particle applier / remover to apply / remove particles associated with that system
                # The list should contain functions with signature condition() --> bool,
                # where True means the condition is satisfied
                particle_system: [],
            },
            "projection_mesh_params": projection_mesh_params[method_type],
        }
    }

    # Define objects to load: a light, table, and cloth
    light_cfg = OrderedDict(
        type="LightObject",
        name="light",
        light_type="Sphere",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 2.0],
    )

    table_cfg = OrderedDict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="265851637a59eb2f882f822c83877cbc",
        scale=[4.0, 4.0, 4.0],
        position=[0, 0, 0.7],
    )

    # Create the scene config to load -- empty scene with a light and table
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [light_cfg, table_cfg],
    }

    # Sanity check inputs: Remover + Adjacency + Fluid will not work because we are using a visual_only
    # object, so contacts will not be triggered with this object

    # Load the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Grab references to table
    table = env.scene.object_registry("name", "table")

    # Set the viewer camera appropriately
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-1.11136405, -1.12709412,  1.99587299]),
        orientation=np.array([ 0.44662832, -0.17829795, -0.32506992,  0.81428652]),
    )

    # Let objects settle first
    for _ in range(10):
        env.step(np.array([]))

    # If we're using a projection volume, we manually add in the metalink required to use the volume
    modifier = DatasetObject(
        prim_path="/World/modifier",
        name="modifier",
        category="dishtowel",
        model="Tag_Dishtowel_Basket_Weave_Red",
        scale=np.ones(3) * 2.0,
        visual_only=method_type == "Projection" or particle_system == StainSystem,  # Fluid + adjacency requires the object to have collision geoms active
        abilities=abilities,
    )
    modifier_root_link_path = f"{modifier.prim_path}/base_link"
    modifier._prim = modifier._load(og.sim)
    if method_type == "Projection":
        metalink_path = f"{modifier.prim_path}/{modification_metalink[modifier_type]}"
        og.sim.stage.DefinePrim(metalink_path, "Xform")
        joint_prim = create_joint(
            prim_path=f"{modifier_root_link_path}/{modification_metalink[modifier_type]}_joint",
            body0=modifier_root_link_path,
            body1=metalink_path,
            joint_type="FixedJoint",
            enabled=True,
        )
        local_area_quat = np.array([0, 0.707, 0, 0.707])    # Needs to be rotated so the metalink points downwards from cloth
        joint_prim.GetAttribute("physics:localRot0").Set(Gf.Quatf(*(local_area_quat[[3, 0, 1, 2]])))
    modifier._post_load()
    modifier._loaded = True
    og.sim.import_object(modifier)
    modifier.set_position(np.array([0, 0, 5.0]))

    # Take a step to make sure all objects are properly initialized
    for _ in range(25):
        env.step(np.array([]))

    # If we're removing particles, set the table's covered state to be True
    if modifier_type == "particleRemover":
        table.states[Covered].set_value(particle_system, True)

        # Take a few steps to let particles settle
        for _ in range(25):
            env.step(np.array([]))

    # Enable camera teleoperation for convenience
    og.sim.enable_viewer_camera_teleoperation()

    # Set the modifier object to be in position to modify particles
    if method_type == "Projection":
        # Higher z to showcase projection volume at work
        z = 1.85
    elif particle_system == StainSystem:
        # Lower z needed to allow for adjacency bounding box to overlap properly
        z = 1.175
    else:
        # Higher z needed for actual physical interaction to accommodate non-negligible particle radius
        z = 1.22
    modifier.keep_still()
    modifier.set_position_orientation(
        position=np.array([0, 0.3, z]),
        orientation=np.array([0, 0, 0, 1.0]),
    )

    # Move object in square around table
    deltas = [
        [150, np.array([-0.01, 0, 0])],
        [60, np.array([0, -0.01, 0])],
        [150, np.array([0.01, 0, 0])],
        [60, np.array([0, 0.01, 0])],
    ]
    for t, delta in deltas:
        for i in range(t):
            modifier.set_position(modifier.get_position() + delta)
            og.sim.step()

    # Always shut down environment at the end
    env.close()


if __name__ == "__main__":
    main()
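
To verify that the modifier is actually doing its job while the towel sweeps, you can poll the table's Covered state for the chosen system. A sketch, with table and particle_system as defined in the demo:

from omnigibson.object_states import Covered

covered = table.states[Covered].get_value(particle_system)
print(f"table still covered with {particle_system.name}: {covered}")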

Kinematics Demo

This demo is useful for...

  • Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
  • Understanding how to import additional objects after the environment is created
python -m omnigibson.examples.object_states.sample_kinematics_demo

This demo procedurally generates a mini populated scene: it spawns a shelf and samples cracker boxes inside it, then loads a cabinet with an open microwave on top and samples plates and apples both inside and on top of the microwave.

sample_kinematics_demo.py
import logging
import os

import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.objects import DatasetObject


# Make sure object states and global contact reporting are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo to use the raycasting-based sampler to load objects onTop and/or inside another
    Loads a cabinet with a microwave open on top of it, and two plates with apples, one inside and one on top of the microwave
    Then loads a shelf and cracker boxes inside of it
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        }
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)
    env.step([])

    # Sample microwave and boxes
    sample_boxes_on_shelf(env)
    sample_microwave_plates_apples(env)

    max_steps = 100 if short_exec else -1
    step = 0
    while step != max_steps:
        env.step(np.array([]))
        step += 1

    # Always close environment at the end
    env.close()


def sample_microwave_plates_apples(env):
    # Load cabinet, set position manually, and step 100 times
    logging.info("Loading cabinet and microwave")

    microwave = DatasetObject(
        prim_path="/World/microwave",
        name="microwave",
        category="microwave",
        model="7128",
        scale=0.5,
    )
    og.sim.import_object(microwave)
    microwave.set_position(np.array([0, 0, 5.0]))
    env.step(np.array([]))              # One step is needed for the object to be fully initialized

    cabinet = DatasetObject(
        prim_path="/World/cabinet",
        name="cabinet",
        category="bottom_cabinet",
        model="46380",
    )

    og.sim.import_object(cabinet)
    z_offset = -cabinet.aabb_center[2] + cabinet.aabb_extent[2] / 2
    cabinet.set_position(np.array([1.0, 0, z_offset]))
    env.step(np.array([]))              # One step is needed for the object to be fully initialized

    # Set microwave on top of the cabinet, open it, and step 100 times
    logging.info("Placing microwave OnTop of the cabinet")
    assert microwave.states[object_states.OnTop].set_value(cabinet, True, use_ray_casting_method=True)
    assert microwave.states[object_states.Open].set_value(True)
    logging.info("Microwave loaded and placed")
    for _ in range(50):
        env.step(np.array([]))

    logging.info("Loading plates")
    n_plates = 2
    n_apples = 2
    for i in range(n_plates):
        plate = DatasetObject(
            prim_path=f"/World/plate{i}",
            name=f"plate{i}",
            category="plate",
            model="plate_000",
            bounding_box=np.array([0.25, 0.25, 0.05]),
        )
        og.sim.import_object(plate)
        env.step(np.array([]))              # One step is needed for the object to be fully initialized

        # Put the 1st plate in the microwave
        if i == 0:
            logging.info("Loading plate Inside the microwave")
            assert plate.states[object_states.Inside].set_value(microwave, True, use_ray_casting_method=True)
        else:
            logging.info("Loading plate OnTop the microwave")
            assert plate.states[object_states.OnTop].set_value(microwave, True, use_ray_casting_method=True)

        logging.info("Plate %d loaded and placed." % i)
        for _ in range(50):
            env.step(np.array([]))

        logging.info("Loading three apples OnTop of the plate")
        for j in range(n_apples):
            apple = DatasetObject(
                prim_path=f"/World/apple{i * n_apples + j}",
                name=f"apple{i * n_apples + j}",
                category="apple",
                model="00_0",
            )
            og.sim.import_object(apple)
            env.step(np.array([]))  # One step is needed for the object to be fully initialized
            assert apple.states[object_states.OnTop].set_value(plate, True, use_ray_casting_method=True)
            logging.info("Apple %d loaded and placed." % j)
            for _ in range(50):
                env.step(np.array([]))


def sample_boxes_on_shelf(env):
    shelf = DatasetObject(
        prim_path=f"/World/shelf",
        name=f"shelf",
        category="shelf",
        model="1170df5b9512c1d92f6bce2b7e6c12b7",
        bounding_box=np.array([1.0, 0.4, 2.0]),
    )
    og.sim.import_object(shelf)
    z_offset = -shelf.aabb_center[2] + shelf.aabb_extent[2] / 2
    shelf.set_position(np.array([-1.0, 0, z_offset]))
    env.step(np.array([]))  # One step is needed for the object to be fully initialized

    logging.info("Shelf placed")
    for _ in range(50):
        env.step(np.array([]))

    for i in range(5):
        box = DatasetObject(
            prim_path=f"/World/box{i}",
            name=f"box{i}",
            category="cracker_box",
            model="cracker_box_000",
            bounding_box=np.array([0.2, 0.05, 0.3]),
        )
        og.sim.import_object(box)
        env.step(np.array([]))  # One step is needed for the object to be fully initialized
        box.states[object_states.Inside].set_value(shelf, True, use_ray_casting_method=True)
        logging.info(f"Box {i} placed.")

        for _ in range(50):
            env.step(np.array([]))


if __name__ == "__main__":
    main()
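
The raycasting-based sampler is stochastic and can fail when there is not enough free space, which is why this demo asserts on the boolean returned by set_value. In your own code, a retry loop is a more forgiving pattern; place_on_top below is a hypothetical helper, not part of OmniGibson:

from omnigibson import object_states

def place_on_top(obj, target, max_tries=5):
    # Hypothetical helper: retry the raycasting-based sampler a few times before giving up
    for _ in range(max_tries):
        if obj.states[object_states.OnTop].set_value(target, True, use_ray_casting_method=True):
            return True
    return False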

Slicing Demo

This demo is useful for...

  • Understanding how slicing works in OmniGibson
  • Understanding how to access individual objects once the environment is created
python -m omnigibson.examples.object_states.slicing_demo

This demo spawns an apple on a table with a knife above it, and lets the knife fall to "cut" the apple in half.

slicing_demo.py
import logging
import numpy as np
from collections import OrderedDict

import omnigibson as og
from omnigibson.macros import gm
import omnigibson.utils.transform_utils as T

# Make sure object states, contact reporting, and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True
gm.ENABLE_TRANSITION_RULES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of the slicing mechanism
    Loads a table with an apple on top and a knife above it, then lets the knife fall to slice the apple in half
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene with table, knife, and apple
    table_cfg = OrderedDict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="19203",
        scale=0.9,
        position=[0, 0, 0.532],
    )

    apple_cfg = OrderedDict(
        type="DatasetObject",
        name="apple",
        category="apple",
        model="00_0",
        scale=1.5,
        position=[0.085, 0,  0.90],
    )

    knife_cfg = OrderedDict(
        type="DatasetObject",
        name="knife",
        category="table_knife",
        model="4",
        scale=2.5,
        position=[0, 0, 10.0],
    )

    light0_cfg = OrderedDict(
        type="LightObject",
        name="light0",
        light_type="Sphere",
        radius=0.01,
        intensity=4000.0,
        position=[1.217, -0.848, 1.388],
    )

    light1_cfg = OrderedDict(
        type="LightObject",
        name="light1",
        light_type="Sphere",
        radius=0.01,
        intensity=4000.0,
        position=[-1.217, 0.848, 1.388],
    )

    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg]
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Grab reference to apple and knife
    apple = env.scene.object_registry("name", "apple")
    knife = env.scene.object_registry("name", "knife")

    # Update the simulator's viewer camera's pose so it points towards the table
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([ 0.544888, -0.412084,  1.11569 ]),
        orientation=np.array([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
    )

    # Let apple settle
    for _ in range(50):
        env.step(np.array([]))

    knife.keep_still()
    knife.set_position_orientation(
        position=apple.get_position() + np.array([-0.15, 0.0, 0.2]),
        orientation=T.euler2quat([-np.pi / 2, 0, 0]),
    )

    input("The knife will fall on the apple and slice it. Press [ENTER] to continue.")

    # Step simulation for a bit so that apple is sliced
    for i in range(1000):
        env.step(np.array([]))

    input("Apple has been sliced! Press [ENTER] to terminate the demo.")

    # Always close environment at the end
    env.close()


if __name__ == "__main__":
    main()

🤖 Robots

These examples showcase how to interact and leverage robot objects in OmniGibson.

Robot Visualizer Demo

This demo is useful for...

  • Understanding how to load a robot into OmniGibson after an environment is created
  • Accessing all OmniGibson robot models
  • Viewing robots' low-level joint motion
python -m omnigibson.examples.robots.all_robots_visualizer

This demo iterates over all robots in OmniGibson, loading each one into an empty scene and randomly moving its joints for a brief amount of time.

all_robots_visualizer.py
import logging

import numpy as np

import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS
from omnigibson.scenes import Scene


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot demo
    Loads all robots in an empty scene, generate random actions
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)
    # Create empty scene with no robots in it initially
    cfg = {
        "scene": {
            "type": "Scene",
        }
    }

    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Iterate over all robots and demo their motion
    for robot_name, robot_cls in REGISTERED_ROBOTS.items():
        # Create and import robot
        robot = robot_cls(
            prim_path=f"/World/{robot_name}",
            name=robot_name,
            obs_modalities=[],              # We're just moving robots around so don't load any observation modalities
        )
        og.sim.import_object(robot)

        # At least one step is always needed while sim is playing for any imported object to be fully initialized
        og.sim.play()
        og.sim.step()

        # Reset robot and make sure it's not moving
        robot.reset()
        robot.keep_still()

        # Log information
        logging.info(f"Loaded {robot_name}")
        logging.info(f"Moving {robot_name}")

        if not headless:
            # Set viewer in front facing robot
            og.sim.viewer_camera.set_position_orientation(
                position=np.array([ 2.69918369, -3.63686664,  4.57894564]),
                orientation=np.array([0.39592411, 0.1348514 , 0.29286304, 0.85982   ]),
            )

        og.sim.enable_viewer_camera_teleoperation()

        # Hold still briefly so viewer can see robot
        for _ in range(100):
            og.sim.step()

        # Then apply random actions for a bit
        for _ in range(30):
            action = np.random.uniform(-1, 1, robot.action_dim)
            for _ in range(10):
                env.step(action)

        # Re-import the scene
        og.sim.stop()
        og.sim.import_scene(Scene())

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
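
If you only need a single model, the same import pattern works outside the loop. A sketch, assuming an environment with an empty scene exists as above ("Fetch" is one of the keys in REGISTERED_ROBOTS):

import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS

robot = REGISTERED_ROBOTS["Fetch"](
    prim_path="/World/Fetch",
    name="Fetch",
    obs_modalities=[],      # skip observation loading if you only want to see motion
)
og.sim.import_object(robot)
og.sim.play()
og.sim.step()               # at least one step is needed to fully initialize the robot
robot.reset()
robot.keep_still()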

Robot Control Demo

This demo is useful for...

  • Understanding how different controllers can be used to control robots
  • Understanding how to teleoperate a robot through external commands
python -m omnigibson.examples.robots.robot_control_example

This demo lets you choose a robot and the set of controllers to control the robot, and then lets you teleoperate the robot using your keyboard.

robot_control_example.py
"""
Example script demo'ing robot control.

Options for random actions, as well as selection of robot action space
"""
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.robots import REGISTERED_ROBOTS
from omnigibson.utils.ui_utils import choose_from_options, KeyboardRobotController


CONTROL_MODES = OrderedDict(
    random="Use autonomous random actions (default)",
    teleop="Use keyboard control",
)

SCENES = OrderedDict(
    Rs_int="Realistic interactive home environment (default)",
    empty="Empty environment with no objects",
)

# Don't use GPU dynamics and use flatcache for a performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def choose_controllers(robot, random_selection=False):
    """
    For a given robot, iterates over all components of the robot, and returns the requested controller type for each
    component.

    :param robot: BaseRobot, robot class from which to infer relevant valid controller options
    :param random_selection: bool, if the selection is random (for automatic demo execution). Default False

    :return OrderedDict: Mapping from individual robot component (e.g.: base, arm, etc.) to selected controller names
    """
    # Create new dict to store responses from user
    controller_choices = OrderedDict()

    # Grab the default controller config so we have the registry of all possible controller options
    default_config = robot._default_controller_config

    # Iterate over all components in robot
    for component, controller_options in default_config.items():
        # Select controller
        options = list(sorted(controller_options.keys()))
        choice = choose_from_options(
            options=options, name="{} controller".format(component), random_selection=random_selection
        )

        # Add to user responses
        controller_choices[component] = choice

    return controller_choices


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot control demo with selection
    Queries the user to select a robot, the controllers, a scene and a type of input (random actions or teleop)
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose scene to load
    scene_model = choose_from_options(options=SCENES, name="scene", random_selection=random_selection)

    # Choose robot to create
    robot_name = choose_from_options(
        options=list(sorted(REGISTERED_ROBOTS.keys())), name="robot", random_selection=random_selection
    )

    # Create the config for generating the environment we want
    scene_cfg = OrderedDict()
    if scene_model == "empty":
        scene_cfg["type"] = "Scene"
    else:
        scene_cfg["type"] = "InteractiveTraversableScene"
        scene_cfg["scene_model"] = scene_model

    # Add the robot we want to load
    robot0_cfg = OrderedDict()
    robot0_cfg["type"] = robot_name
    robot0_cfg["obs_modalities"] = ["rgb", "depth", "seg_instance", "normal", "scan", "occupancy_grid"]
    robot0_cfg["action_type"] = "continuous"
    robot0_cfg["action_normalize"] = True

    # Compile config
    cfg = OrderedDict(scene=scene_cfg, robots=[robot0_cfg])

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Choose robot controller to use
    robot = env.robots[0]
    controller_choices = choose_controllers(robot=robot, random_selection=random_selection)

    # Choose control mode
    if random_selection:
        control_mode = "random"
    else:
        control_mode = choose_from_options(options=CONTROL_MODES, name="control mode")

    # Update the control mode of the robot
    controller_config = {component: {"name": name} for component, name in controller_choices.items()}
    robot.reload_controllers(controller_config=controller_config)

    # Update the simulator's viewer camera's pose so it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([1.46949, -3.97358, 2.21529]),
        orientation=np.array([0.56829048, 0.09569975, 0.13571846, 0.80589577]),
    )

    # Reset environment
    env.reset()

    # Create the action generator (provides both random and keyboard-teleop actions)
    action_generator = KeyboardRobotController(robot=robot)

    # Print out relevant keyboard info if using keyboard teleop
    if control_mode == "teleop":
        action_generator.print_keyboard_teleop_info()

    # Other helpful user info
    print("Running demo.")
    print("Press ESC to quit")

    # Loop control until user quits
    max_steps = -1 if not short_exec else 100
    step = 0
    while step != max_steps:
        action = action_generator.get_random_action() if control_mode == "random" else action_generator.get_teleop_action()
        for _ in range(10):
            env.step(action=action)
            step += 1

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
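
Beyond choosing controllers interactively, you can swap a robot's controllers in your own scripts with the same robot.reload_controllers() call used above. Below is a minimal sketch; the component key "base" and controller name "DifferentialDriveController" are illustrative assumptions -- inspect robot._default_controller_config (as choose_controllers() does above) to see the components and controller options your robot actually exposes.

# Minimal sketch: reassign one component's controller at runtime.
# NOTE: "base" / "DifferentialDriveController" are assumed names -- check
# robot._default_controller_config for the valid options on your robot.
robot = env.robots[0]
robot.reload_controllers(controller_config={"base": {"name": "DifferentialDriveController"}})
env.reset()  # mirror the demo: reset after reloading controllers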

Robot Grasping Demo

This demo is useful for...

  • Understanding the difference between physical and sticky (1) grasping
  • Understanding how to teleoperate a robot through external commands
  1. physical means natural friction is required to hold objects, while sticky means that objects are constrained to the robot's gripper once contact is made
python -m omnigibson.examples.robots.grasping_mode_example

This demo lets you choose a grasping mode and then loads a Fetch robot and a cube on a table. You can then teleoperate the robot to grasp the cube, observing the difference in grasping behavior based on the grasping mode chosen.

grasping_mode_example.py
"""
Example script demo'ing robot manipulation control with grasping.
"""
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options, KeyboardRobotController

GRASPING_MODES = OrderedDict(
    sticky="Sticky Mitten - Objects are magnetized when they touch the fingers and a CLOSE command is given",
    physical="Physical Grasping - No additional grasping assistance applied",
)

# Don't use GPU dynamics and use flatcache for a performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot grasping mode demo with selection
    Queries the user to select a grasping mode
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose type of grasping
    grasping_mode = choose_from_options(options=GRASPING_MODES, name="grasping mode", random_selection=random_selection)

    # Create environment configuration to use
    scene_cfg = OrderedDict(type="Scene")
    robot0_cfg = OrderedDict(
        type="Fetch",
        obs_modalities=["rgb"],     # we're just doing a grasping demo so we don't need all observation modalities
        action_type="continuous",
        action_normalize=True,
        grasping_mode=grasping_mode,
    )

    # Define objects to load
    table_cfg = OrderedDict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="1b4e6f9dd22a8c628ef9d976af675b86",
        bounding_box=[0.5, 0.5, 0.8],
        fit_avg_dim_volume=False,
        fixed_base=True,
        position=[0.7, -0.1, 0.6],
        orientation=[0, 0, 0.707, 0.707],
    )

    chair_cfg = OrderedDict(
        type="DatasetObject",
        name="chair",
        category="straight_chair",
        model="2a8d87523e23a01d5f40874aec1ee3a6",
        bounding_box=None,
        fit_avg_dim_volume=True,
        fixed_base=False,
        position=[0.45, 0.65, 0.425],
        orientation=[0, 0, -0.9990215, -0.0442276],
    )

    box_cfg = OrderedDict(
        type="PrimitiveObject",
        name="box",
        primitive_type="Cube",
        rgba=[1.0, 0, 0, 1.0],
        size=0.05,
        position=[0.53, -0.1, 0.97],
    )

    # Compile config
    cfg = OrderedDict(scene=scene_cfg, robots=[robot0_cfg], objects=[table_cfg, chair_cfg, box_cfg])

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1/60., physics_timestep=1/60.)

    # Reset the robot
    robot = env.robots[0]
    robot.set_position([0, 0, 0])
    robot.reset()
    robot.keep_still()

    # Update the simulator's viewer camera's pose so it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-2.39951, 2.26469, 2.66227]),
        orientation=np.array([-0.23898481, 0.48475231, 0.75464013, -0.37204802]),
    )

    # Create the keyboard teleop controller (also used for random actions during automated runs)
    action_generator = KeyboardRobotController(robot=robot)

    # Print out relevant keyboard info if using keyboard teleop
    action_generator.print_keyboard_teleop_info()

    # Other helpful user info
    print("Running demo with grasping mode {}.".format(grasping_mode))
    print("Press ESC to quit")

    # Loop control until user quits
    max_steps = -1 if not short_exec else 100
    step = 0
    while step != max_steps:
        action = action_generator.get_random_action() if random_selection else action_generator.get_teleop_action()
        for _ in range(10):
            env.step(action)
            step += 1

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
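
If you simply want a specific grasping mode in your own configuration, set grasping_mode on the robot entry exactly as the script above does. A minimal sketch (the values are the GRASPING_MODES keys from above):

# Minimal robot config sketch: select the grasping assistance level directly
robot0_cfg = OrderedDict(
    type="Fetch",
    obs_modalities=["rgb"],
    action_type="continuous",
    action_normalize=True,
    grasping_mode="sticky",  # "sticky" magnetizes objects on contact; "physical" relies on friction alone
)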

🧰 Simulator

These examples showcase useful functionality from OmniGibson's monolithic Simulator object.

What's the difference between Environment and Simulator?

The Simulator class is a lower-level object that:

  • handles importing scenes and objects into the actual simulation
  • directly interfaces with the underlying physics engine

The Environment class thinly wraps the Simulator's core functionality by:

  • providing convenience functions for automatically importing a predefined scene, object(s), and robot(s) (via the cfg argument), as well as a task
  • providing an OpenAI Gym interface for stepping through the simulation

While most of the core functionality in Environment (as well as more fine-grained physics control) can be replicated via direct calls to Simulator (og.sim), doing so requires a deeper understanding of OmniGibson's infrastructure and is not recommended for new users.
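
To make the distinction concrete, here is a minimal sketch contrasting the two entry points; it only uses calls shown elsewhere on this page, and the camera pose values are arbitrary placeholders.

import numpy as np
import omnigibson as og

# High-level route: Environment builds the scene and robot from a config
# and exposes Gym-style stepping
cfg = {
    "scene": {"type": "Scene"},
    "robots": [{"type": "Turtlebot", "obs_modalities": ["rgb"]}],
}
env = og.Environment(configs=cfg)
env.step(np.random.uniform(-1, 1, env.robots[0].action_dim))

# Lower-level route: og.sim is the Simulator instance the Environment drives.
# You can step physics directly or use Simulator-only features, e.g. moving
# the viewer camera or saving/restoring state
og.sim.step()
og.sim.viewer_camera.set_position_orientation(
    position=np.array([0.0, -2.0, 1.5]),  # arbitrary placeholder pose
    orientation=np.array([0.5, 0.0, 0.0, 0.866]),
)
env.close()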

State Saving and Loading Demo

This demo is useful for...

  • Understanding how to interact with objects using the mouse
  • Understanding how to save the active simulator state to a file
  • Understanding how to restore the simulator state from a given file
python -m omnigibson.examples.simulator.sim_save_load_example

This demo loads a stripped-down scene with the Turtlebot robot, and lets you interact with objects to modify the scene. The state is then saved to a .json file and subsequently restored in the simulation.

sim_save_load_example.py
import os
import logging
import numpy as np

import omnigibson as og
from omnigibson.utils.ui_utils import KeyboardEventHandler
import carb

TEST_OUT_PATH = ""  # Define output directory here.

def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select whether they are saving or loading an environment, and interactively
    shows how an environment can be saved or restored.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
            "load_object_categories": ["floors", "walls", "bed", "bottom_cabinet", "chair"],
        },
        "robots": [
            {
                "type": "Turtlebot",
                "obs_modalities": ["rgb", "depth"],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set the camera to a good angle
    def set_camera_pose():
        og.sim.viewer_camera.set_position_orientation(
            position=np.array([-0.229375, -3.40576, 7.26143]),
            orientation=np.array([0.27619733, -0.00230233, -0.00801152, 0.9610648]),
        )
    set_camera_pose()

    # Give user instructions, and then loop until completed
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to manipulate environment until ready, then press Z to exit
        print()
        print("Modify the scene by SHIFT + left clicking objects and dragging them. Once finished, press Z.")
        # Register callback so the loop exits once the user presses Z
        def complete_loop():
            nonlocal completed
            completed = True
        KeyboardEventHandler.add_keyboard_callback(carb.input.KeyboardInput.Z, complete_loop)
    while not completed:
        env.step(np.random.uniform(-1, 1, env.robots[0].action_dim))

    print("Completed scene modification, saving scene...")
    save_path = os.path.join(TEST_OUT_PATH, "saved_stage.json")
    og.sim.save(json_path=save_path)

    print("Re-loading scene...")
    og.sim.restore(json_path=save_path)

    # Take a sim step and play
    og.sim.step()
    og.sim.play()
    set_camera_pose()

    # Loop until user terminates
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to manipulate environment until ready, then press Z to exit
        print()
        print("View reloaded scene. Once finished, press Z.")
        # Register callback so the loop exits once the user presses Z
        KeyboardEventHandler.add_keyboard_callback(carb.input.KeyboardInput.Z, complete_loop)
    while not completed:
        env.step(np.zeros(env.robots[0].action_dim))

    # Shutdown omnigibson at the end
    og.shutdown()


if __name__ == "__main__":
    main()
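
Outside of this demo, the save/restore cycle reduces to two Simulator calls; the path below is a placeholder -- point it anywhere writable.

# Minimal save/restore sketch
og.sim.save(json_path="/tmp/saved_stage.json")
og.sim.restore(json_path="/tmp/saved_stage.json")
og.sim.step()  # as in the demo above, step and play after restoring
og.sim.play()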