OmniGibson ships with many demo scripts that highlight its modularity and diverse feature set, intended as building blocks for your research. Let's try them out!
⚙️ A quick word about macros
Why macros?
Macros enforce global behavior that is consistent within an individual Python process but can differ between processes. This is useful because globally enabling all of OmniGibson's features can cause unnecessary slowdowns, so configuring the macros for your specific use case can optimize performance.
For example, Omniverse provides a so-called flatcache feature which delivers significant performance boosts, but cannot be used when fluids or soft bodies are present. So, we should ideally always set gm.ENABLE_FLATCACHE=True unless fluids or soft bodies are present in our environment.
Macros define a globally available set of magic numbers or flags used throughout OmniGibson. These can either be set directly in omnigibson/macros.py, or modified programmatically at runtime via:
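A minimal sketch (the gm flags shown here are the ones used by the demos later on this page; the values are illustrative and should be adapted to your use case):

from omnigibson.macros import gm, macros

# Set macros BEFORE the environment / simulator is created so they apply to the whole process
gm.ENABLE_OBJECT_STATES = True      # enable OmniGibson's object states system
gm.USE_GPU_DYNAMICS = False         # GPU dynamics is only needed for fluids and cloth
gm.ENABLE_FLATCACHE = True          # flatcache gives a performance boost when GPU dynamics is off

# Module-level settings can similarly be overridden through the `macros` collection;
# inspect the relevant module first to understand how a given macro is used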
gm refers to the "global" macros -- i.e.: settings that generally impact the entire OmniGibson stack. These are usually the only settings you may need to modify.
macros captures all remaining macros defined throughout OmniGibson's codebase -- these are often hardcoded default settings or magic numbers defined in a specific module. These can also be overridden, but we recommend inspecting the module first to understand how it is used.
Many of our examples set various macros at the beginning of the script, which is a good way to understand use cases for modifying them!
🌎 Environments
These examples showcase the full OmniGibson stack in use, and the types of environments immediately supported.
BEHAVIOR Task Demo
This demo is useful for...
Understanding how to instantiate a BEHAVIOR task
Understanding how a pre-defined configuration file is used
This demo instantiates one of our BEHAVIOR tasks (with the option to sample object locations online) in a fully-populated scene and loads a Fetch robot. The robot executes random actions and the environment is reset periodically.
import logging
import os

import yaml

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Generates a BEHAVIOR Task environment from a pre-defined configuration file.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Ask the user whether they want online object sampling or not
    sampling_options = {
        False: "Use a pre-sampled cached BEHAVIOR activity scene",
        True: "Sample the BEHAVIOR activity in an online fashion",
    }
    should_sample = choose_from_options(
        options=sampling_options, name="online object sampling", random_selection=random_selection
    )

    # Load the pre-selected configuration and set the online_sampling flag
    config_filename = os.path.join(og.example_config_path, "fetch_behavior.yaml")
    cfg = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)
    cfg["task"]["online_object_sampling"] = should_sample

    # If we're online sampling, make sure global contacts are enabled so we can accurately detect kinematic changes
    if should_sample:
        gm.ENABLE_GLOBAL_CONTACT_REPORTING = True

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()
Navigation Task Demo
This demo is useful for...
Understanding how to instantiate a navigation task
Understanding how a pre-defined configuration file is used
This demo instantiates one of our navigation tasks in a fully-populated scene and loads a Turtlebot robot. The robot executes random actions and the environment is reset periodically.
import logging
import os

import yaml

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options

# Use flatcache for performance speedups unless we're using GPU dynamics
if not gm.USE_GPU_DYNAMICS:
    gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select a type of scene and loads a turtlebot into it, generating
    a Point-Goal navigation task within the environment.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Infer which config to load based on the scene selected
    scene_options = {
        "InteractiveTraversableScene": "Rs_int scene with fully interactive objects",
        "StaticTraversableScene": "Adrian scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)
    config_name = "turtlebot_nav" if scene_type == "InteractiveTraversableScene" else "turtlebot_static_nav"
    config_filename = os.path.join(og.example_config_path, f"{config_name}.yaml")
    config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=config)

    # Allow user to move camera more easily
    og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()
Config Selection Demo
This demo is useful for...
Understanding how pre-defined configuration files are used
import logging
import os

import yaml

import omnigibson as og
from omnigibson.utils.asset_utils import folder_is_hidden
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available interactive scene and loads a turtlebot into it.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Grab all configs and choose one to load
    og_config_path = og.example_config_path
    available_configs = sorted(
        [
            f
            for f in os.listdir(og_config_path)
            if (not folder_is_hidden(f) and os.path.isfile(os.path.join(og_config_path, f)))
        ]
    )
    config_id = choose_from_options(options=available_configs, name="config file", random_selection=random_selection)
    logging.info("Using config file " + config_id)
    config_filename = os.path.join(og.example_config_path, config_id)
    config = yaml.load(open(config_filename, "r"), Loader=yaml.FullLoader)

    load_options = {
        "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
        "Full": "Load all interactive objects in the scene",
    }
    load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
    if load_mode == "Quick":
        config["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=config)

    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            for robot_name in action.keys():
                action[robot_name] = action[robot_name] * 0.05
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()
🧑‍🏫 Learning
These examples showcase how OmniGibson can be used to train embodied AI agents.
Reinforcement Learning Demo
This demo is useful for...
Understanding how to hook up OmniGibson to an external algorithm
"""Example training code using stable-baselines3 PPO for one BEHAVIOR activity.Note that due to the sparsity of the reward, this training code will not converge and achieve task success.This only serves as a starting point that users can further build upon."""importargparseimportloggingimportos,time,cv2importomnigibsonasogfromomnigibsonimportexample_config_pathlog=logging.getLogger(__name__)try:importgymimporttorchasthimporttorch.nnasnnfromstable_baselines3importPPOfromstable_baselines3.common.evaluationimportevaluate_policyfromstable_baselines3.common.preprocessingimportmaybe_transposefromstable_baselines3.common.torch_layersimportBaseFeaturesExtractorfromstable_baselines3.common.utilsimportset_random_seedfromstable_baselines3.common.callbacksimportCallbackList,CheckpointCallback,EvalCallbackexceptModuleNotFoundError:log.error("stable-baselines3 is not installed. You would need to do: pip install stable-baselines3")exit(1)classCustomCombinedExtractor(BaseFeaturesExtractor):def__init__(self,observation_space:gym.spaces.Dict):# We do not know features-dim here before going over all the items,# so put something dummy for now. PyTorch requires callingsuper(CustomCombinedExtractor,self).__init__(observation_space,features_dim=1)self.debug_length=10self.debug_mode=Trueextractors={}self.step_index=0self.img_save_dir='img_save_dir'os.makedirs(self.img_save_dir,exist_ok=True)total_concat_size=0feature_size=128forkey,subspaceinobservation_space.spaces.items():ifkeyin["rgb","ins_seg"]:print(subspace.shape)n_input_channels=subspace.shape[2]# channel lastcnn=nn.Sequential(nn.Conv2d(n_input_channels,4,kernel_size=8,stride=4,padding=0),nn.ReLU(),nn.MaxPool2d(2),nn.Conv2d(4,8,kernel_size=4,stride=2,padding=0),nn.ReLU(),nn.MaxPool2d(2),nn.Conv2d(8,4,kernel_size=3,stride=1,padding=0),nn.ReLU(),nn.Flatten(),)test_tensor=th.zeros([subspace.shape[2],subspace.shape[0],subspace.shape[1]])withth.no_grad():n_flatten=cnn(test_tensor[None]).shape[1]fc=nn.Sequential(nn.Linear(n_flatten,feature_size),nn.ReLU())extractors[key]=nn.Sequential(cnn,fc)elifkeyin["accum_reward",'obj_joint']:extractors[key]=nn.Sequential(nn.Linear(subspace.shape[0],feature_size))else:continuetotal_concat_size+=feature_sizeself.extractors=nn.ModuleDict(extractors)# Update the features dim manuallyself._features_dim=total_concat_sizedefforward(self,observations)->th.Tensor:encoded_tensor_list=[]self.step_index+=1# self.extractors contain nn.Modules that do all the processing.forkey,extractorinself.extractors.items():ifkeyin["rgb",]:ifself.debug_mode:cv2.imwrite(os.path.join(self.img_save_dir,'{0:06d}.png'.format(self.step_index%self.debug_length)),cv2.cvtColor((observations[key][0].cpu().numpy()*255).astype('uint8'),cv2.COLOR_RGB2BGR))observations[key]=observations[key].permute((0,3,1,2))elifkeyin["ins_seg"]:observations[key]=observations[key].permute((0,3,1,2))/500.elifkeyin['accum_reward','obj_joint']:iflen(observations[key].shape)==3:observations[key]=observations[key].squeeze(-1)# [:, :, 0]else:continueencoded_tensor_list.append(extractor(observations[key]))feature=th.cat(encoded_tensor_list,dim=1)returnfeaturedefmain():# Parse argsparser=argparse.ArgumentParser(description="Train or evaluate a PPO agent in BEHAVIOR")parser.add_argument("--config",type=str,default=f"{example_config_path}/fetch_behavior.yaml",help="Absolute path to desired OmniGibson environment config to load",)parser.add_argument("--checkpoint",type=str,default=None,help="Absolute path to desired PPO checkpoint to load for 
evaluation",)parser.add_argument("--eval",action="store_true",help="If set, will evaluate the PPO agent found from --checkpoint",)args=parser.parse_args()tensorboard_log_dir=os.path.join("log_dir",time.strftime("%Y%m%d-%H%M%S"))os.makedirs(tensorboard_log_dir,exist_ok=True)prefix=''seed=0env=og.Environment(configs=args.config,action_timestep=1/60.,physics_timestep=1/60.)# If we're evaluating, hide the ceilings and enable camera teleoperation so the user can easily# visualize the rollouts dynamicallyifargs.eval:ceiling=env.scene.object_registry("name","ceilings")ceiling.visible=Falseog.sim.enable_viewer_camera_teleoperation()# Set the setset_random_seed(seed)env.reset()policy_kwargs=dict(features_extractor_class=CustomCombinedExtractor,)os.makedirs(tensorboard_log_dir,exist_ok=True)ifargs.eval:assertargs.checkpointisnotNone,"If evaluating a PPO policy, @checkpoint argument must be specified!"model=PPO.load(args.checkpoint)print("Starting evaluation...")mean_reward,std_reward=evaluate_policy(model,env,n_eval_episodes=50)print("Finished evaluation!")log.info(f"Mean reward: {mean_reward} +/- {std_reward:.2f}")else:model=PPO("MultiInputPolicy",env,verbose=1,tensorboard_log=tensorboard_log_dir,policy_kwargs=policy_kwargs,n_steps=20*10,batch_size=8,device='cuda',)checkpoint_callback=CheckpointCallback(save_freq=1000,save_path=tensorboard_log_dir,name_prefix=prefix)log.debug(model.policy)print(model)print("Starting training...")model.learn(total_timesteps=10000000,callback=checkpoint_callback,eval_env=env,eval_freq=1000,n_eval_episodes=20)print("Finished training!")if__name__=="__main__":main()
🏔️ Scenes
These examples showcase how to leverage OmniGibson's large-scale, diverse scenes shipped with the BEHAVIOR dataset.
Scene Selector Demo
This demo is useful for...
Understanding how to load a scene into OmniGibson
Accessing all BEHAVIOR dataset scenes
python -m omnigibson.examples.scenes.scene_selector
This demo lets you choose a scene from the BEHAVIOR dataset, loads it along with a Turtlebot robot, and cycles the resulting environment periodically.
import logging

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.asset_utils import get_available_g_scenes, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options

# Don't use GPU dynamics and use flatcache for performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select any available non-interactive scene and loads a turtlebot into it.

    It steps the environment 100 times with random actions sampled from the action space,
    using the Gym interface, resetting it 10 times.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose the scene type to load
    scene_options = {
        "InteractiveTraversableScene": "Procedurally generated scene with fully interactive objects",
        "StaticTraversableScene": "Monolithic scene mesh with no interactive objects",
    }
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # Choose the scene model to load
    scenes = get_available_og_scenes() if scene_type == "InteractiveTraversableScene" else get_available_g_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"scene model: {scene_model}")

    cfg = {
        "scene": {
            "type": scene_type,
            "scene_model": scene_model,
        },
        "robots": [
            {
                "type": "Turtlebot",
                "obs_modalities": ["scan", "rgb", "depth"],
                "action_type": "continuous",
                "action_normalize": True,
            }
        ],
    }

    # If the scene type is interactive, also check if we want to quick load or full load the scene
    if scene_type == "InteractiveTraversableScene":
        load_options = {
            "Quick": "Only load the building assets (i.e.: the floors, walls, ceilings)",
            "Full": "Load all interactive objects in the scene",
        }
        load_mode = choose_from_options(options=load_options, name="load mode", random_selection=random_selection)
        if load_mode == "Quick":
            cfg["scene"]["load_object_categories"] = ["floors", "walls", "ceilings"]

    # Load the environment
    env = og.Environment(configs=cfg)

    # Allow user to move camera more easily
    if not headless:
        og.sim.enable_viewer_camera_teleoperation()

    # Run a simple loop and reset periodically
    max_iterations = 10 if not short_exec else 1
    for j in range(max_iterations):
        logging.info("Resetting environment")
        env.reset()
        for i in range(100):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            if done:
                logging.info("Episode finished after {} timesteps".format(i + 1))
                break

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()
Traversability Map Demo
This demo is useful for...
Understanding how to leverage traversability map information from BEHAVIOR dataset scenes
import logging
import os

import cv2
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

from omnigibson.utils.asset_utils import get_og_scene_path, get_available_og_scenes
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    Traversable map demo
    Loads the floor plan and obstacles for the requested scene, and overlays them in a visual figure
    such that the highlighted area reflects the traversable (free-space) area
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    scenes = get_available_og_scenes()
    scene_model = choose_from_options(options=scenes, name="scene model", random_selection=random_selection)
    print(f"Generating traversability map for scene {scene_model}")

    trav_map_size = 200
    trav_map_erosion = 2

    trav_map = Image.open(os.path.join(get_og_scene_path(scene_model), "layout", "floor_trav_0.png"))
    trav_map = np.array(trav_map.resize((trav_map_size, trav_map_size)))
    trav_map = cv2.erode(trav_map, np.ones((trav_map_erosion, trav_map_erosion)))

    if not headless:
        plt.figure(figsize=(12, 12))
        plt.imshow(trav_map)
        plt.title(f"Traversable area of {scene_model} scene")

    if not headless:
        plt.show()


if __name__ == "__main__":
    main()
🎾 Objects
These examples showcase how to leverage objects in OmniGibson.
Load Object Demo
This demo is useful for...
Understanding how to load an object into OmniGibson
Accessing all BEHAVIOR dataset asset categories and models
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.utils.asset_utils import (
    get_all_object_categories,
    get_og_avg_category_specs,
    get_object_models_of_category,
)
from omnigibson.utils.ui_utils import choose_from_options


def main(random_selection=False, headless=False, short_exec=False):
    """
    This demo shows how to load any scaled objects from the iG object model dataset
    The user selects an object model to load
    The objects can be loaded into an empty scene, an interactive scene (iG) or a static scene (Gibson)
    The example also shows how to use the Environment API or directly the Simulator API, loading objects
    and robots and executing actions
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)
    scene_options = ["Scene", "InteractiveTraversableScene", "StaticTraversableScene"]
    scene_type = choose_from_options(options=scene_options, name="scene type", random_selection=random_selection)

    # -- Choose the object to load --

    # Select a category to load
    available_obj_categories = get_all_object_categories()
    obj_category = choose_from_options(
        options=available_obj_categories, name="object category", random_selection=random_selection
    )

    # Select a model to load
    available_obj_models = get_object_models_of_category(obj_category)
    obj_model = choose_from_options(
        options=available_obj_models, name="object model", random_selection=random_selection
    )

    # Load the specs of the object categories, e.g., common scaling factor
    avg_category_spec = get_og_avg_category_specs()

    # Create and load this object into the simulator
    obj_cfg = OrderedDict(
        type="DatasetObject",
        name="obj",
        category=obj_category,
        model=obj_model,
        bounding_box=avg_category_spec.get(obj_category),
        fit_avg_dim_volume=True,
        position=[0.5, -0.5, 1.01],
    )

    cfg = {
        "scene": {
            "type": scene_type,
        },
        "objects": [obj_cfg],
    }
    if scene_type == "InteractiveTraversableScene":
        cfg["scene"]["scene_model"] = "Rs_int"
    elif scene_type == "StaticTraversableScene":
        cfg["scene"]["scene_model"] = "Adrian"

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Step through the environment
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        env.step(np.array([]))

    # Always close the environment at the end
    env.close()


if __name__ == "__main__":
    main()
Object Visualizer Demo
This demo is useful for...
Viewing objects' textures as rendered in OmniGibson
Viewing articulated objects' range of motion
Understanding how to reference object instances from the environment
Understanding how to set object poses and joint states
This demo lets you choose a specific object from the BEHAVIOR dataset, and rotates the object in-place. If the object is articulated, it additionally moves its joints through its full range of motion.
import argparse
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.utils.asset_utils import (
    get_all_object_categories,
    get_object_models_of_category,
)
from omnigibson.utils.ui_utils import choose_from_options
import omnigibson.utils.transform_utils as T


def main(random_selection=False, headless=False, short_exec=False):
    """
    Visualizes object as specified by its USD path, @usd_path. If None is specified, will instead
    result in an object selection from OmniGibson's object dataset
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Assuming that if random_selection=True, headless=True, short_exec=True, we are calling it from tests and we
    # do not want to parse args (it would fail because the calling function is pytest "testfile.py")
    usd_path = None
    if not (random_selection and headless and short_exec):
        parser = argparse.ArgumentParser()
        parser.add_argument(
            "--usd_path",
            default=None,
            help="USD Model to load",
        )
        args = parser.parse_args()
        usd_path = args.usd_path

    # Define objects to load
    light0_cfg = OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light0",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 2.0],
    )

    light1_cfg = OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="sphere_light1",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, 2.0, 2.0],
    )

    # Make sure we have a valid usd path
    if usd_path is None:
        # Select a category to load
        available_obj_categories = get_all_object_categories()
        obj_category = choose_from_options(
            options=available_obj_categories, name="object category", random_selection=random_selection
        )

        # Select a model to load
        available_obj_models = get_object_models_of_category(obj_category)
        obj_model = choose_from_options(
            options=available_obj_models, name="object model", random_selection=random_selection
        )

        kwargs = {
            "type": "DatasetObject",
            "category": obj_category,
            "model": obj_model,
        }
    else:
        kwargs = {
            "type": "USDObject",
            "usd_path": usd_path,
        }

    # Import the desired object
    obj_cfg = OrderedDict(
        **kwargs,
        name="obj",
        usd_path=usd_path,
        visual_only=True,
        position=[0, 0, 10.0],
    )

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [light0_cfg, light1_cfg, obj_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.00913503, -1.95750906, 1.36407314]),
        orientation=np.array([0.6350064, 0., 0., 0.77250687]),
    )

    # Grab the object references
    obj = env.scene.object_registry("name", "obj")

    # Standardize the scale of the object so it fits in a [1,1,1] box -- note that we have to stop the simulator
    # in order to set the scale
    extents = obj.aabb_extent
    og.sim.stop()
    obj.scale = (np.ones(3) / extents).min()
    og.sim.play()
    env.step(np.array([]))

    # Move the object so that its center is at [0, 0, 1]
    center_offset = obj.aabb_center - obj.get_position() + np.array([0, 0, 1.0])
    obj.set_position(center_offset)

    # Allow the user to easily move the camera around
    og.sim.enable_viewer_camera_teleoperation()

    # Rotate the object in place
    steps_per_rotate = 360
    steps_per_joint = steps_per_rotate / 10
    max_steps = 100 if short_exec else 10000
    for i in range(max_steps):
        z_angle = (2 * np.pi * (i % steps_per_rotate) / steps_per_rotate)
        quat = T.euler2quat(np.array([0, 0, z_angle]))
        pos = T.quat2mat(quat) @ center_offset
        if obj.n_dof > 0:
            frac = (i % steps_per_joint) / steps_per_joint
            j_frac = -1.0 + 2.0 * frac if (i // steps_per_joint) % 2 == 0 else 1.0 - 2.0 * frac
            obj.set_joint_positions(positions=j_frac * np.ones(obj.n_dof), normalized=True, target=False)
            obj.keep_still()
        obj.set_position_orientation(position=pos, orientation=quat)
        env.step(np.array([]))


if __name__ == "__main__":
    main()
Highlight Object
This demo is useful for...
Understanding how to highlight individual objects within a cluttered scene
Understanding how to access groups of objects from the environment
This demo loads the fully-populated Rs_int scene and periodically toggles highlighting on every window object in the scene, showing how highlighting can be dynamically enabled and disabled on groups of objects.
"""Generate example top-down segmentation map via renderer"""importloggingimportnumpyasnpimportomnigibsonasogdefmain(random_selection=False,headless=False,short_exec=False):""" Highlights visually all object instances of some given category and then removes the highlighting It also demonstrates how to apply an action on all instances of objects of a given category ONLY WORKS WITH OPTIMIZED RENDERING (not on Mac) """logging.info("*"*80+"\nDescription:"+main.__doc__+"*"*80)# Create the scene config to load -- empty scenecfg={"scene":{"type":"InteractiveTraversableScene","scene_model":"Rs_int",}}# Create the environmentenv=og.Environment(configs=cfg)# Grab all window objectswindows=og.sim.scene.object_registry("category","window")# Step environment while toggling window highlightingi=0highlighted=Falsemax_steps=-1ifnotshort_execelse1000whilei!=max_steps:env.step(np.array([]))ifi%50==0:highlighted=nothighlightedlogging.info(f"Toggling window highlight to: {highlighted}")forwindowinwindows:# Note that this property is R/W!window.highlighted=highlightedi+=1# Always close the environment at the endenv.close()if__name__=="__main__":main()
Draw Object Bounding Box Demo
This demo is useful for...
Understanding how to access observations from a GymObservable object
Understanding how to access objects' bounding box information
Understanding how to dynamically modify vision modalities
This demo loads a door object and banana object, and partially obscures the banana with the door. It generates both "loose" and "tight" bounding boxes (where the latter respects occlusions) for both objects, and dumps them to an image on disk.
import logging
from collections import OrderedDict

import matplotlib.pyplot as plt
import numpy as np

import omnigibson as og
from omni.isaac.synthetic_utils.visualization import colorize_bboxes


def main(random_selection=False, headless=False, short_exec=False):
    """
    Shows how to obtain the bounding box of an articulated object.
    Draws the bounding boxes around the loaded objects, a banana and a door, and writes the visualized
    image to disk at the current directory named 'bbox_2d_[loose / tight]_img.png'.

    NOTE: In the GUI, bounding boxes can be natively viewed by clicking on the sensor ((*)) icon at the top,
    and then selecting the appropriate bounding box modalities, and clicking "Show". See:
    https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_replicator/visualization.html#the-visualizer
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Specify objects to load
    banana_cfg = OrderedDict(
        type="DatasetObject",
        name="banana",
        category="banana",
        model="09_0",
        scale=[3.0, 5.0, 2.0],
        position=[-0.906661, -0.545106, 0.136824],
        orientation=[0, 0, 0.76040583, -0.6494482],
    )

    door_cfg = OrderedDict(
        type="DatasetObject",
        name="door",
        category="door",
        model="8930",
        position=[-2.0, 0, 0.70000001],
        orientation=[0, 0, -0.38268343, 0.92387953],
    )

    # Create the scene config to load -- empty scene with a few objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [banana_cfg, door_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Set camera to appropriate viewing pose
    cam = og.sim.viewer_camera
    cam.set_position_orientation(
        position=np.array([-4.62785, -0.418575, 0.933943]),
        orientation=np.array([0.52196595, -0.4231939, -0.46640436, 0.5752612]),
    )

    # Add bounding boxes to camera sensor
    bbox_modalities = ["bbox_3d", "bbox_2d_loose", "bbox_2d_tight"]
    for bbox_modality in bbox_modalities:
        cam.add_modality(bbox_modality)

    # Take a few steps to let objects settle
    for i in range(100):
        env.step(np.array([]))

    # Grab observations from viewer camera and write them to disk
    obs = cam.get_obs()

    for bbox_modality in bbox_modalities:
        # Print out each of the modalities
        print(f"Observation modality {bbox_modality}:")
        print(obs[bbox_modality])

        # Also write the 2d loose bounding box to disk
        if "3d" not in bbox_modality:
            colorized_img = colorize_bboxes(bboxes_2d_data=obs[bbox_modality], bboxes_2d_rgb=obs["rgb"], num_channels=4)
            plt.imsave(f"{bbox_modality}_img.png", colorized_img)

    # Always close environment down at end
    env.close()


if __name__ == "__main__":
    main()
🌡️ Object States
These examples showcase OmniGibson's powerful object states functionality, which captures both individual and relational kinematic and non-kinematic states.
Attachment Demo
This demo is useful for...
Understanding how to leverage the Attached state
Understanding how to enable objects to be attachable
import numpy as np
import yaml
from collections import OrderedDict

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options
from omnigibson.object_states.attachment import AttachmentType
from omnigibson.object_states import Attached

# Make sure object states are enabled and global contact reporting is enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True


def setup_scene_for_abilities(abilities1, abilities2):
    # Make sure simulation is stopped
    og.sim.stop()

    cfg = yaml.load(open(f"{og.example_config_path}/default_cfg.yaml", "r"), Loader=yaml.FullLoader)

    # Add objects that we want to create
    light_cfg = OrderedDict(
        type="LightObject",
        name="light",
        light_type="Sphere",
        radius=0.01,
        intensity=5000,
        position=[0, 0, 1.0],
    )

    apple_cfg = OrderedDict(
        type="DatasetObject",
        name="apple",
        category="apple",
        model="00_0",
        abilities=abilities1,
        position=[0, 0, 0.04],
    )

    fridge_cfg = OrderedDict(
        type="DatasetObject",
        name="fridge",
        category="fridge",
        model="12252",
        abilities=abilities2,
        position=[2, 0, 0.8],
    )

    cfg["objects"] = [light_cfg, apple_cfg, fridge_cfg]

    # Recreate the environment (this will automatically override the old environment instance)
    # We load the default config, which is simply a Scene with no objects loaded in by default
    env = og.Environment(configs=cfg)

    # Grab apple and fridge
    apple = env.scene.object_registry("name", "apple")
    fridge = env.scene.object_registry("name", "fridge")

    # Set viewer camera pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.972333, -2.0899, 1.0654]),
        orientation=np.array([0.60682517, -0.24656188, -0.28443909, 0.70004632]),
    )

    # Take a few steps
    for _ in range(5):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    return env, apple, fridge


def demo_sticky_attachment():
    ######################################################################################
    # Sticky attachment
    # can attach if touching and at least one object has the sticky attachment type.
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.STICKY}},
        abilities2={},
    )
    assert Attached in obj1.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        og.sim.step()
    assert obj1.states[Attached].get_value(obj2)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    # obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_symmetric_attachment():
    ######################################################################################
    # Symmetric attachment
    # can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_symmetric_attachment_missing_symmetric():
    ######################################################################################
    # Symmetric attachment - FAIL because only 1 object has the symmetric attachment type
    # can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={},
    )
    assert Attached in obj1.states
    assert Attached not in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_symmetric_attachment_diff_categories():
    ######################################################################################
    # Symmetric attachment - FAIL because the two objects have different attachment category
    # can attach if touching and both objects have the symmetric attachment type and the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "magnet"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.SYMMETRIC, "attachment_category": "velcro"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_male_female_attachment():
    ######################################################################################
    # Male / female attachment
    # can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "usb"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_male_female_attachment_missing_opposite_end():
    ######################################################################################
    # Male / female attachment - FAIL because both objects are male.
    # can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_failed_male_female_attachment_diff_categories():
    ######################################################################################
    # Male / female attachment - FAIL because the two objects have different attachment category
    # can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "hdmi"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 but they are NOT attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert not obj1.states[Attached].get_value(obj2)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


def demo_male_female_attachment_dump_load():
    ######################################################################################
    # Male / female attachment with dump_state and load_state
    # can attach if touching, both have the opposite end (male / female) but the same attachment category
    ######################################################################################
    env, obj1, obj2 = setup_scene_for_abilities(
        abilities1={"attachable": {"attachment_type": AttachmentType.MALE, "attachment_category": "usb"}},
        abilities2={"attachable": {"attachment_type": AttachmentType.FEMALE, "attachment_category": "usb"}},
    )
    assert Attached in obj1.states
    assert Attached in obj2.states

    # Obj1 moves towards obj2 and they are attached together.
    obj1.set_linear_velocity(velocity=np.array([3.0, 0, 3.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Save the state.
    state = og.sim.dump_state()

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Unattach obj1 and obj2.
    obj1.states[Attached].set_value(obj2, False)
    assert not obj1.states[Attached].get_value(obj2)
    assert not obj2.states[Attached].get_value(obj1)

    # Obj1 moves away from obj2.
    obj1.set_linear_velocity(velocity=np.array([-2.0, 0, 1.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)

    # Load the state where the two objects are attached.
    og.sim.load_state(state)

    # Attached state should be restored correctly
    assert obj1.states[Attached].get_value(obj2)
    assert obj2.states[Attached].get_value(obj1)

    # Apply a large force to obj1 but the two objects cannot move much because obj2 is heavy.
    obj1.set_linear_velocity(velocity=np.array([10.0, 0, 50.0]))
    for i in range(100):
        env.step(np.array([]))  # empty action array since action space is 0 (no robots in the env)


demo_names_to_demos = {
    "demo_sticky_attachment": demo_sticky_attachment,
    "demo_symmetric_attachment": demo_symmetric_attachment,
    "demo_failed_symmetric_attachment_missing_symmetric": demo_failed_symmetric_attachment_missing_symmetric,
    "demo_failed_symmetric_attachment_diff_categories": demo_failed_symmetric_attachment_diff_categories,
    "demo_male_female_attachment": demo_male_female_attachment,
    "demo_failed_male_female_attachment_missing_opposite_end": demo_failed_male_female_attachment_missing_opposite_end,
    "demo_failed_male_female_attachment_diff_categories": demo_failed_male_female_attachment_diff_categories,
    "demo_male_female_attachment_dump_load": demo_male_female_attachment_dump_load,
}


def main(random_selection=False, headless=False, short_exec=False):
    # Loop through the demos and let the user choose which example to run each time
    for i in range(len(demo_names_to_demos)):
        demo_name = choose_from_options(options=list(demo_names_to_demos.keys()), name="attachment demo")
        # Run the demo
        demo_names_to_demos[demo_name]()


if __name__ == "__main__":
    main()
Cleaning Demo
This demo is useful for...
Understanding how WaterSource-enabled objects can be toggled on to generate water particles
Understanding how to programmatically generate Stain and Dust particles on objects
Understanding how particles can be removed via a ParticleRemover object
This demo first loads a basic cube object with the ability to remove particles into a fully-populated scene, and then spawns dust and stain particles on various objects and turns on the sink. You can then move the cube object around to first absorb water particles so that it's "soaked" with water, and then drag it across objects with dust or stain particles to remove them.
Manipulate the object by holding down Shift and then Left-click + Drag!
import logging

import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.systems import DustSystem, StainSystem, WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of a cleaning task
    Loads an interactive scene and sets all object surfaces to be dirty
    Also loads a cleaning tool that can be soaked in water and used to clean objects if moved manually
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- Rs_int with only a few object categories loaded, as
    # well as a custom block object that will be used as a cleaning tool
    def check_water_saturation(obj):
        return obj.states[object_states.Saturated].get_value(WaterSystem)

    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
            "load_object_categories": [
                "floors", "walls", "ceilings", "breakfast_table", "bottom_cabinet",
                "sink", "stove", "fridge", "window",
            ],
        },
        "objects": [
            # A cleaning tool (cuboid) with the ability to be saturated and remove stain, dust, and water particles
            {
                "type": "PrimitiveObject",
                "name": "block",
                "primitive_type": "Cube",
                "scale": [0.15, 0.1, 0.03],
                "rgba": [0.5, 1.0, 1.0, 1.0],
                "abilities": {
                    "saturable": {},
                    "particleRemover": {
                        "method": ParticleModifyMethod.ADJACENCY,
                        "conditions": {
                            # For a specific particle system, this specifies what conditions are required in order for
                            # the particle applier / remover to apply / remove particles associated with that system
                            # The list should contain functions with signature condition() --> bool,
                            # where True means the condition is satisfied
                            # In this case, we only allow our cleaning tool to remove stains and dust particles if
                            # the object is saturated with water, i.e.: it's "soaked" with water particles
                            StainSystem: [check_water_saturation],
                            DustSystem: [check_water_saturation],
                            WaterSystem: [],
                        },
                    },
                },
                "position": [-1.4, 3.0, 1.5],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Set everything that can go dirty and activate the water sources
    dusty_objects = env.scene.object_registry("category", "breakfast_table")
    stained_objects = env.scene.object_registry("category", "bottom_cabinet")
    water_source_objects = env.scene.get_objects_with_state(object_states.WaterSource)

    for obj in dusty_objects:
        logging.info(f"Setting object {obj.name} to be Dusty")
        obj.states[object_states.Covered].set_value(DustSystem, True)

    for obj in stained_objects:
        logging.info(f"Setting object {obj.name} to be Stained")
        obj.states[object_states.Covered].set_value(StainSystem, True)

    for obj in water_source_objects:
        if object_states.ToggledOn in obj.states:
            logging.info(f"Setting water source object {obj} to be ToggledOn")
            obj.states[object_states.ToggledOn].set_value(True)

    # Set the camera to be in a good position
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.825556, 2.42499, 1.04104]),
        orientation=np.array([0.56919735, 0.09896035, 0.13981109, 0.80416049]),
    )

    max_steps = -1 if not short_exec else 1000
    step = 0
    try:
        for i in range(200):
            env.step(np.array([]))
        while step != max_steps:
            env.step(np.array([]))  # Empty action since no robots in the environment
            step += 1
    finally:
        # Always close environment at the end
        env.close()


if __name__ == "__main__":
    main()
Simple Cleaning Demo
This demo is useful for...
Understanding how WaterSource-enabled objects can be toggled on to generate water particles
Understanding how to programmatically generate Stain and Dust particles on objects
Understanding how particles can be removed via a ParticleRemover object
This demo is similar in functionality to the full cleaning demo, but instead of loading a fully-populated scene, only a select few objects are loaded. In this case, the loaded scrub_brush object is the ParticleRemover object. As before, you can move the brush around to first absorb water particles so that it's "soaked" with water, and then drag it across objects with dust or stain particles to remove them.
Manipulate the object by holding down Shift and then Left-click + Drag!
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.systems import DustSystem, StainSystem, WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of a cleaning task that resets after everything has been cleaned
    Loads an empty scene with a sink, a dusty table and a dirty and stained bowl, and a cleaning tool
    If everything is cleaned, or after N steps, the scene resets to the initial state
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {
            "type": "Scene",
        }
    }

    # Define objects to load into the environment
    sink_cfg = OrderedDict(
        type="DatasetObject",
        name="sink",
        category="sink",
        model="sink_1",
        scale=[0.8, 0.8, 0.8],
        abilities={"toggleable": {}, "waterSource": {}, "waterSink": {}},
        position=[-0.7, 0, 0.53],
    )

    def check_water_saturation(obj):
        return obj.states[object_states.Saturated].get_value(WaterSystem)

    brush_cfg = OrderedDict(
        type="DatasetObject",
        name="brush",
        category="scrub_brush",
        model="scrub_brush_000",
        avg_obj_dims={"size": [0.1, 0.1, 0.1], "density": 67.0},
        fit_avg_dim_volume=True,
        position=[1.0, 0, 0.4],
        abilities={
            "saturable": {},
            "particleRemover": {
                "method": ParticleModifyMethod.ADJACENCY,
                "conditions": {
                    # For a specific particle system, this specifies what conditions are required in order for
                    # the particle applier / remover to apply / remove particles associated with that system
                    # The list should contain functions with signature condition() --> bool,
                    # where True means the condition is satisfied
                    # In this case, we only allow our cleaning tool to remove stains and dust particles if
                    # the object is saturated with water, i.e.: it's "soaked" with water particles
                    StainSystem: [check_water_saturation],
                    DustSystem: [check_water_saturation],
                    WaterSystem: [],
                },
            },
        },
    )

    # Desk that's dusty
    desk_cfg = OrderedDict(
        type="DatasetObject",
        name="desk",
        category="breakfast_table",
        model="19203",
        scale=[0.8, 0.8, 0.8],
        position=[1.0, 0, 0.48],
    )

    # Bowl with stains
    bowl_cfg = OrderedDict(
        type="DatasetObject",
        name="bowl",
        category="bowl",
        model="68_0",
        scale=np.array([0.8, 0.8, 0.8]),
        position=[-1.0, 0, 0.48],
    )

    cfg["objects"] = [sink_cfg, brush_cfg, desk_cfg, bowl_cfg]

    # Create the environment!
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Set camera to ideal angle for viewing objects
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.782289, -0.633009, 1.4475]),
        orientation=np.array([0.48871723, -0.24618907, -0.37654978, 0.74750028]),
    )

    # Take a few steps to let the objects settle, and then sanity check the initial state
    for _ in range(10):
        env.step(np.array([]))  # Empty action since no robots are in the scene

    sink = env.scene.object_registry("name", "sink")
    brush = env.scene.object_registry("name", "brush")
    desk = env.scene.object_registry("name", "desk")
    bowl = env.scene.object_registry("name", "bowl")

    assert sink.states[object_states.ToggledOn].set_value(True)
    assert desk.states[object_states.Covered].set_value(DustSystem, True)
    assert bowl.states[object_states.OnTop].set_value(desk, True, use_ray_casting_method=True)
    assert brush.states[object_states.OnTop].set_value(desk, True, use_ray_casting_method=True)
    assert bowl.states[object_states.Covered].set_value(StainSystem, True)

    # Take a step, and save the state
    env.step(np.array([]))
    initial_state = og.sim.dump_state()

    # Main simulation loop.
    max_steps = 1000
    max_iterations = -1 if not short_exec else 1
    iteration = 0

    try:
        while iteration != max_iterations:
            # Keep stepping until table or bowl are clean, or we reach 1000 steps
            steps = 0
            while (
                desk.states[object_states.Covered].get_value(DustSystem)
                and bowl.states[object_states.Covered].get_value(StainSystem)
                and steps != max_steps
            ):
                steps += 1
                env.step(np.array([]))
                logging.info(f"Step {steps}")

            if not desk.states[object_states.Covered].get_value(DustSystem):
                logging.info("Reset because Table cleaned")
            elif not bowl.states[object_states.Covered].get_value(StainSystem):
                logging.info("Reset because Bowl cleaned")
            else:
                logging.info("Reset because max steps")

            # Reset to the initial state
            og.sim.load_state(initial_state)

            iteration += 1
    finally:
        # Always shut down environment at the end
        env.close()


if __name__ == "__main__":
    main()
Folded Demo
This demo is useful for...
Understanding how to load a softbody (cloth) version of a BEHAVIOR dataset object
Understanding how to enable cloth objects to be foldable
Understanding the current heuristics used for gauging a cloth's "foldness"
This is a beta feature
Our Folded state is still a WIP, and should not be expected to accurately capture the full semantics of folding cloth-enabled objects.
import logging

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.object_states import Folded
from omnigibson.utils.constants import PrimType

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for cloth)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of cloth objects that can potentially be folded.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene + custom cloth objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": [
            {
                "type": "DatasetObject",
                "name": "carpet",
                "category": "carpet",
                "model": "carpet_0",
                "prim_type": PrimType.CLOTH,
                "abilities": {"foldable": {}},
                "position": [0, 0, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "dishtowel",
                "category": "dishtowel",
                "model": "Tag_Dishtowel_Basket_Weave_Red",
                "prim_type": PrimType.CLOTH,
                "scale": 5.0,
                "abilities": {"foldable": {}},
                "position": [1, 1, 0.5],
            },
            {
                "type": "DatasetObject",
                "name": "shirt",
                "category": "t-shirt",
                "model": "t-shirt_000",
                "prim_type": PrimType.CLOTH,
                "scale": 0.05,
                "abilities": {"foldable": {}},
                "position": [-1, 1, 0.5],
                "orientation": [0.7071, 0., 0.7071, 0.],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Grab object references
    carpet = env.scene.object_registry("name", "carpet")
    dishtowel = env.scene.object_registry("name", "dishtowel")
    shirt = env.scene.object_registry("name", "shirt")

    max_steps = 100 if short_exec else -1
    steps = 0

    # Criterion #1: the area of the convex hull of the projection of points onto the x-y plane should be reduced
    # Criterion #2: the diagonal of the convex hull of the projection of points onto the x-y plane should be reduced
    # Criterion #3: the face normals of the cloth should mostly point along the z-axis
    while steps != max_steps:
        og.sim.step()

        flag_area_reduction, flag_diagonal_reduction = carpet.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = carpet.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info = 'carpet: [folded] %d [A] %d [D] %d [S] %d' % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        flag_area_reduction, flag_diagonal_reduction = dishtowel.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = dishtowel.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info += " || dishtowel: [folded] %d [A] %d [D] %d [S] %d" % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        flag_area_reduction, flag_diagonal_reduction = shirt.states[Folded].check_projection_area_and_diagonal()
        flag_smoothness = shirt.states[Folded].check_smoothness()
        folded = flag_area_reduction and flag_diagonal_reduction and flag_smoothness
        info += " || tshirt: [folded] %d [A] %d [D] %d [S] %d" % (folded, flag_area_reduction, flag_diagonal_reduction, flag_smoothness)

        print(info)
        steps += 1

    # Shut down env at the end
    env.close()


if __name__ == "__main__":
    main()
Temperature Demo
This demo is useful for...
Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
Understanding how temperature changes are propagated to individual objects from individual heat sources or sinks
This demo loads in various heat sources and sinks, and places an apple within close proximity to each of them. As the environment steps, each apple's temperature is printed in real-time, showcasing OmniGibson's rudimentary temperature dynamics.
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of temperature change
    Loads a stove, a microwave and an oven, all toggled on, and five frozen apples
    The user can move the apples to see them change from frozen, to normal temperature, to cooked and burnt
    This demo also shows how to load objects ToggledOn and how to set the initial temperature of an object
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Define specific objects we want to load in with the scene directly
    obj_configs = []

    # Light
    obj_configs.append(OrderedDict(
        type="LightObject",
        light_type="Sphere",
        name="light",
        radius=0.01,
        intensity=1e5,
        position=[-2.0, -2.0, 1.0],
    ))

    # Stove
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="stove",
        category="stove",
        model="101943",
        position=[0, 0, 0.65],
    ))

    # Microwave
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="microwave",
        category="microwave",
        model="7128",
        scale=0.25,
        position=[2.5, 0, 0.094],
    ))

    # Oven
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="oven",
        category="oven",
        model="7120",
        position=[-1.25, 0, 0.80],
    ))

    # Tray
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="tray",
        category="tray",
        model="tray_000",
        scale=0.15,
        position=[0, 0, 1.24],
    ))

    # Fridge
    obj_configs.append(OrderedDict(
        type="DatasetObject",
        name="fridge",
        category="fridge",
        model="12252",
        abilities={
            "coldSource": {
                "temperature": -100.0,
                "requires_inside": True,
            }
        },
        position=[1.25, 0, 0.90],
    ))

    # 5 Apples
    for i in range(5):
        obj_configs.append(OrderedDict(
            type="DatasetObject",
            name=f"apple{i}",
            category="apple",
            model="00_0",
            position=[0, i * 0.05, 1.65],
        ))

    # Create the scene config to load -- empty scene with desired objects
    cfg = {
        "scene": {
            "type": "Scene",
        },
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Get reference to relevant objects
    stove = env.scene.object_registry("name", "stove")
    microwave = env.scene.object_registry("name", "microwave")
    oven = env.scene.object_registry("name", "oven")
    tray = env.scene.object_registry("name", "tray")
    fridge = env.scene.object_registry("name", "fridge")
    apples = list(env.scene.object_registry("category", "apple"))

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([0.46938863, -3.97887141, 1.64106008]),
        orientation=np.array([0.63311689, 0.00127259, 0.00155577, 0.77405359]),
    )

    # Let objects settle
    for _ in range(25):
        env.step(np.array([]))

    # Turn on all scene objects
    stove.states[object_states.ToggledOn].set_value(True)
    microwave.states[object_states.ToggledOn].set_value(True)
    oven.states[object_states.ToggledOn].set_value(True)

    # Set initial temperature of the apples to -50 degrees Celsius, and move the apples to different objects
    for apple in apples:
        apple.states[object_states.Temperature].set_value(-50)
    apples[0].states[object_states.Inside].set_value(oven, True, use_ray_casting_method=True)
    apples[1].set_position(stove.links["heat_source_link"].get_position() + np.array([0, 0, 0.1]))
    apples[2].states[object_states.OnTop].set_value(tray, True, use_ray_casting_method=True)
    apples[3].states[object_states.Inside].set_value(fridge, True, use_ray_casting_method=True)
    apples[4].states[object_states.Inside].set_value(microwave, True, use_ray_casting_method=True)

    steps = 0
    max_steps = -1 if not short_exec else 1000

    # Main recording loop
    locations = [f'{loc:>20}' for loc in ["Inside oven", "On stove", "On tray", "Inside fridge", "Inside microwave"]]
    print()
    print(f"{'Apple location:':<20}", *locations)
    while steps != max_steps:
        env.step(np.array([]))
        temps = [f"{apple.states[object_states.Temperature].get_value():>20.2f}" for apple in apples]
        print(f"{'Apple temperature:':<20}", *temps, end="\r")
        steps += 1

    # Always close env at the end
    env.close()


if __name__ == "__main__":
    main()
Heat Source or Sink Demo
This demo is useful for...
Understanding how a heat source (or sink) is visualized in OmniGibson
Understanding how dynamic fire visuals are generated in real-time
import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Create the scene config to load -- empty scene with a stove object added
    cfg = {
        "scene": {"type": "Scene"},
        "objects": [
            {
                "type": "DatasetObject",
                "name": "stove",
                "category": "stove",
                "model": "101908",
                "abilities": {"heatSource": {"requires_toggled_on": True}, "toggleable": {}},
                "position": [0, 0, 0.4],
            }
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Get reference to stove object
    stove = env.scene.object_registry("name", "stove")

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-0.0792399, -1.30104, 1.51981]),
        orientation=np.array([0.54897692, 0.00110359, 0.00168013, 0.83583509]),
    )

    # Make sure necessary object states are included with the stove
    assert object_states.HeatSourceOrSink in stove.states
    assert object_states.ToggledOn in stove.states

    # Take a few steps so that visibility propagates
    for _ in range(5):
        env.step(np.array([]))

    # Heat source is off.
    print("Heat source is OFF.")
    heat_source_state, heat_source_position = stove.states[object_states.HeatSourceOrSink].get_value()
    assert not heat_source_state

    # Toggle on stove, notify user
    input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)
    assert stove.states[object_states.ToggledOn].get_value()

    # Need to take a step to update the state.
    env.step(np.array([]))

    # Heat source is on
    heat_source_state, heat_source_position = stove.states[object_states.HeatSourceOrSink].get_value()
    assert heat_source_state
    for _ in range(500):
        env.step(np.array([]))

    # Toggle off stove, notify user
    input("Heat source will now turn OFF: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(False)
    assert not stove.states[object_states.ToggledOn].get_value()
    for _ in range(200):
        env.step(np.array([]))

    # Move stove, notify user
    input("Heat source is now moving: Press ENTER to continue.")
    stove.set_position(np.array([0, 1.0, 0.4]))
    for i in range(100):
        env.step(np.array([]))

    # Toggle on stove again, notify user
    input("Heat source will now turn ON: Press ENTER to continue.")
    stove.states[object_states.ToggledOn].set_value(True)
    assert stove.states[object_states.ToggledOn].get_value()
    for i in range(500):
        env.step(np.array([]))

    # Shutdown environment at end
    env.close()


if __name__ == "__main__":
    main()
Heated Demo
This demo is useful for...
Understanding how temperature modifications can cause objects' visual changes
Understanding how dynamic steam visuals are generated in real-time
This demo loads in three bowls, and immediately sets their temperatures past their Heated threshold. Steam is generated in real-time from these objects, and then disappears once the temperature of the objects drops below their Heated threshold.
import numpy as np
from collections import OrderedDict

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Define object configurations for objects to load -- we want to load a light and three bowls
    obj_configs = []

    obj_configs.append(OrderedDict(
        type="LightObject", light_type="Sphere", name="light", radius=0.01, intensity=1e5, position=[-2.0, -2.0, 1.0],
    ))

    for i, (scale, x) in enumerate(zip([0.5, 1.0, 2.0], [-0.6, 0, 0.8])):
        obj_configs.append(OrderedDict(
            type="DatasetObject",
            name=f"bowl{i}",
            category="bowl",
            model="68_0",
            scale=scale,
            abilities={"heatable": {}},
            position=[x, 0, 0.2],
        ))

    # Create the scene config to load -- empty scene with light object and bowls
    cfg = {
        "scene": {"type": "Scene"},
        "objects": obj_configs,
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([0.182103, -2.07295, 0.14017]),
        orientation=np.array([0.77787037, 0.00267566, 0.00216149, 0.62841535]),
    )

    # Grab reference to objects of relevance
    objs = list(env.scene.object_registry("category", "bowl"))

    def report_states(objs):
        for obj in objs:
            print("=" * 20)
            print("object:", obj.name)
            print("temperature:", obj.states[object_states.Temperature].get_value())
            print("obj is heated:", obj.states[object_states.Heated].get_value())

    # Report default states
    print("==== Initial state ====")
    report_states(objs)

    # Notify user that we're about to heat the object
    input("Objects will be heated, and steam will slowly rise. Press ENTER to continue.")

    # Heated.
    for obj in objs:
        obj.states[object_states.Temperature].set_value(50)
    env.step(np.array([]))
    report_states(objs)

    # Take a look at the steam effect.
    # After a while, objects will be below the Steam temperature threshold.
    print("==== Objects are now heated... ====")
    print()
    for _ in range(2000):
        env.step(np.array([]))
        # Also print temperatures
        temps = [f"{obj.states[object_states.Temperature].get_value():>7.2f}" for obj in objs]
        print(f"obj temps:", *temps, end="\r")
    print()

    # Objects are not heated anymore.
    print("==== Objects are no longer heated... ====")
    report_states(objs)

    # Close environment at the end
    input("Demo completed. Press ENTER to shutdown environment.")
    env.close()


if __name__ == "__main__":
    main()
Object Texture Demo
This demo is useful for...
Understanding how different object states can result in texture changes
Understanding how to enable objects with texture-changing states
Understanding how to dynamically modify object states
import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm, macros
from omnigibson.systems import WaterSystem
from omnigibson.utils.constants import ParticleModifyMethod

# Make sure object states are enabled
gm.ENABLE_OBJECT_STATES = True


def main():
    # Create the scene config to load -- empty scene plus a light and a cabinet
    cfg = {
        "scene": {
            "type": "Scene",
            "floor_plane_visible": True,
        },
        "objects": [
            {
                "type": "LightObject",
                "name": "light",
                "light_type": "Sphere",
                "radius": 0.01,
                "intensity": 1e5,
                "position": [-2.0, -2.0, 1.0],
            },
            {
                "type": "DatasetObject",
                "name": "cabinet",
                "category": "bottom_cabinet",
                "model": "45087",
                "abilities": {
                    "freezable": {},
                    "cookable": {},
                    "burnable": {},
                    "saturable": {},
                    "toggleable": {},
                    "particleRemover": {
                        "method": ParticleModifyMethod.ADJACENCY,
                        "conditions": {
                            # For a specific particle system, this specifies what conditions are required in order for
                            # the particle applier / remover to apply / remove particles associated with that system
                            # The list should contain functions with signature condition() --> bool,
                            # where True means the condition is satisfied
                            # In this case, we only allow our cabinet to absorb water, with no conditions needed.
                            # This is needed for the Saturated ("saturable") state so that we can modify the texture
                            # according to the water.
                            # NOTE: This will only change color if gm.ENABLE_HQ_RENDERING and gm.USE_GPU_DYNAMICS are
                            # enabled!
                            WaterSystem: [],
                        },
                    },
                },
                "position": [0, 0, 0.55],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Set camera to appropriate viewing pose
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([1.7789, -1.68822, 1.13551]),
        orientation=np.array([0.57065614, 0.20331904, 0.267029, 0.74947212]),
    )

    # Grab reference to object of interest
    obj = env.scene.object_registry("name", "cabinet")

    # Make sure all the appropriate states are in the object
    assert object_states.Frozen in obj.states
    assert object_states.Cooked in obj.states
    assert object_states.Burnt in obj.states
    assert object_states.Saturated in obj.states
    assert object_states.ToggledOn in obj.states

    def report_states():
        # Make sure states are propagated before printing
        for i in range(5):
            env.step(np.array([]))
        print("=" * 20)
        print("temperature:", obj.states[object_states.Temperature].get_value())
        print("obj is frozen:", obj.states[object_states.Frozen].get_value())
        print("obj is cooked:", obj.states[object_states.Cooked].get_value())
        print("obj is burnt:", obj.states[object_states.Burnt].get_value())
        print("obj is soaked:", obj.states[object_states.Saturated].get_value(WaterSystem))
        print("obj is toggledon:", obj.states[object_states.ToggledOn].get_value())
        print("obj textures:", obj.get_textures())

    # Report default states
    print("==== Initial state ====")
    report_states()

    # Notify user that we're about to freeze the object, and then freeze the object
    input("\nObject will be frozen. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(-50)
    report_states()

    # Notify user that we're about to cook the object, and then cook the object
    input("\nObject will be cooked. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(100)
    report_states()

    # Notify user that we're about to burn the object, and then burn the object
    input("\nObject will be burned. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(250)
    report_states()

    # Notify user that we're about to reset the object to its default state, and then reset state
    input("\nObject will be reset to default state. Press ENTER to continue.")
    obj.states[object_states.Temperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    obj.states[object_states.MaxTemperature].set_value(macros.object_states.temperature.DEFAULT_TEMPERATURE)
    report_states()

    # Notify user that we're about to soak the object, and then soak the object
    input("\nObject will be saturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(WaterSystem, True)
    report_states()

    # Notify user that we're about to unsoak the object, and then unsoak the object
    input("\nObject will be unsaturated with water. Press ENTER to continue.")
    obj.states[object_states.Saturated].set_value(WaterSystem, False)
    report_states()

    # Notify user that we're about to toggle on the object, and then toggle on the object
    input("\nObject will be toggled on. Press ENTER to continue.")
    obj.states[object_states.ToggledOn].set_value(True)
    report_states()

    # Notify user that we're about to toggle off the object, and then toggle off the object
    input("\nObject will be toggled off. Press ENTER to continue.")
    obj.states[object_states.ToggledOn].set_value(False)
    report_states()

    # Close environment at the end
    input("Demo completed. Press ENTER to shutdown environment.")
    env.close()


if __name__ == "__main__":
    main()
Particle Applier and Remover Demo
This demo is useful for...
Understanding how a ParticleRemover or ParticleApplier object can be generated
Understanding how particles can be dynamically generated on objects
Understanding different methods for applying and removing particles via the ParticleRemover or ParticleApplier object
This demo loads in a dishtowel and a table and lets you choose the ability configuration to enable the dishtowel with. The dishtowel will then either remove particles from, or apply particles onto, the table as it moves.
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.object_states import Covered
from omnigibson.objects import DatasetObject
from omnigibson.macros import gm, macros
from omnigibson.systems import *
from omnigibson.utils.usd_utils import create_joint
from omnigibson.utils.ui_utils import choose_from_options
from omnigibson.utils.constants import ParticleModifyMethod
from omni.isaac.core.utils.stage import add_reference_to_stage
from pxr import Gf

# Set macros for this example
macros.object_states.particle_modifier.VISUAL_PARTICLES_REMOVAL_LIMIT = 1000
macros.object_states.particle_modifier.FLUID_PARTICLES_REMOVAL_LIMIT = 8000
macros.object_states.particle_modifier.MAX_VISUAL_PARTICLES_APPLIED_PER_STEP = 10
macros.object_states.particle_modifier.MAX_FLUID_PARTICLES_APPLIED_PER_STEP = 40
StainSystem._N_PARTICLES_PER_GROUP = 300

# Make sure object states and GPU dynamics are enabled (GPU dynamics needed for fluids)
gm.ENABLE_OBJECT_STATES = True
gm.USE_GPU_DYNAMICS = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of ParticleApplier and ParticleRemover object states, which enable objects to either apply arbitrary
    particles and remove arbitrary particles from the simulator, respectively.

    Loads an empty scene with a table, and starts clean to allow particles to be applied or pre-covers the table
    with particles to be removed. The ParticleApplier / ParticleRemover state is applied to an imported cloth object
    and allowed to interact with the table, applying / removing particles from the table.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose what configuration to load
    modifier_type = choose_from_options(
        options={
            "particleApplier": "Demo object's ability to apply particles in the simulator",
            "particleRemover": "Demo object's ability to remove particles from the simulator",
        },
        name="particle modifier type",
        random_selection=random_selection,
    )

    modification_metalink = {
        "particleApplier": "particleapplication_link",
        "particleRemover": "particleremover_link",
    }

    particle_mapping = {system.name: system for system in [StainSystem, WaterSystem]}
    particle_type = choose_from_options(
        options={name: f"{name} particles will be applied or removed from the simulator" for name in particle_mapping},
        name="particle type",
        random_selection=random_selection,
    )
    particle_system = particle_mapping[particle_type]

    modification_method = {
        "Adjacency": ParticleModifyMethod.ADJACENCY,
        "Projection": ParticleModifyMethod.PROJECTION,
    }

    projection_mesh_params = {
        "Adjacency": None,
        "Projection": {
            # Either Cone or Cylinder; shape of the projection where particles can be applied / removed
            "type": "Cone",
            # Size of the cone
            "extents": np.array([0.375, 0.375, 0.75]),
        },
    }

    method_type = choose_from_options(
        options={
            "Adjacency": "Close proximity to the object will be used to determine whether particles can be applied / removed",
            "Projection": "A Cone or Cylinder shape protruding from the object will be used to determine whether particles can be applied / removed",
        },
        name="modifier method type",
        random_selection=random_selection,
    )

    # Create the ability kwargs to pass to the object state
    abilities = {
        modifier_type: {
            "method": modification_method[method_type],
            "conditions": {
                # For a specific particle system, this specifies what conditions are required in order for the
                # particle applier / remover to apply / remove particles associated with that system
                # The list should contain functions with signature condition() --> bool,
                # where True means the condition is satisfied
                particle_system: [],
            },
            "projection_mesh_params": projection_mesh_params[method_type],
        }
    }

    # Define objects to load: a light, table, and cloth
    light_cfg = OrderedDict(
        type="LightObject", name="light", light_type="Sphere", radius=0.01, intensity=1e5, position=[-2.0, -2.0, 2.0],
    )

    table_cfg = OrderedDict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="265851637a59eb2f882f822c83877cbc",
        scale=[4.0, 4.0, 4.0],
        position=[0, 0, 0.7],
    )

    # Create the scene config to load -- empty scene with a light and table
    cfg = {
        "scene": {"type": "Scene"},
        "objects": [light_cfg, table_cfg],
    }

    # Sanity check inputs: Remover + Adjacency + Fluid will not work because we are using a visual_only
    # object, so contacts will not be triggered with this object

    # Load the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Grab reference to table
    table = env.scene.object_registry("name", "table")

    # Set the viewer camera appropriately
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-1.11136405, -1.12709412, 1.99587299]),
        orientation=np.array([0.44662832, -0.17829795, -0.32506992, 0.81428652]),
    )

    # Let objects settle first
    for _ in range(10):
        env.step(np.array([]))

    # If we're using a projection volume, we manually add in the metalink required in order to use the volume
    modifier = DatasetObject(
        prim_path="/World/modifier",
        name="modifier",
        category="dishtowel",
        model="Tag_Dishtowel_Basket_Weave_Red",
        scale=np.ones(3) * 2.0,
        # Fluid + adjacency requires the object to have collision geoms active
        visual_only=method_type == "Projection" or particle_system == StainSystem,
        abilities=abilities,
    )
    modifier_root_link_path = f"{modifier.prim_path}/base_link"
    modifier._prim = modifier._load(og.sim)
    if method_type == "Projection":
        metalink_path = f"{modifier.prim_path}/{modification_metalink[modifier_type]}"
        og.sim.stage.DefinePrim(metalink_path, "Xform")
        joint_prim = create_joint(
            prim_path=f"{modifier_root_link_path}/{modification_metalink[modifier_type]}_joint",
            body0=modifier_root_link_path,
            body1=metalink_path,
            joint_type="FixedJoint",
            enabled=True,
        )
        # Needs to be rotated so the metalink points downwards from the cloth
        local_area_quat = np.array([0, 0.707, 0, 0.707])
        joint_prim.GetAttribute("physics:localRot0").Set(Gf.Quatf(*(local_area_quat[[3, 0, 1, 2]])))
    modifier._post_load()
    modifier._loaded = True
    og.sim.import_object(modifier)
    modifier.set_position(np.array([0, 0, 5.0]))

    # Take a step to make sure all objects are properly initialized
    for _ in range(25):
        env.step(np.array([]))

    # If we're removing particles, set the table's covered state to be True
    if modifier_type == "particleRemover":
        table.states[Covered].set_value(particle_system, True)

    # Take a few steps to let particles settle
    for _ in range(25):
        env.step(np.array([]))

    # Enable camera teleoperation for convenience
    og.sim.enable_viewer_camera_teleoperation()

    # Set the modifier object to be in position to modify particles
    if method_type == "Projection":
        # Higher z to showcase projection volume at work
        z = 1.85
    elif particle_system == StainSystem:
        # Lower z needed to allow for adjacency bounding box to overlap properly
        z = 1.175
    else:
        # Higher z needed for actual physical interaction to accommodate non-negligible particle radius
        z = 1.22
    modifier.keep_still()
    modifier.set_position_orientation(
        position=np.array([0, 0.3, z]),
        orientation=np.array([0, 0, 0, 1.0]),
    )

    # Move object in square around table
    deltas = [
        [150, np.array([-0.01, 0, 0])],
        [60, np.array([0, -0.01, 0])],
        [150, np.array([0.01, 0, 0])],
        [60, np.array([0, 0.01, 0])],
    ]
    for t, delta in deltas:
        for i in range(t):
            modifier.set_position(modifier.get_position() + delta)
            # env.step(np.array([]))
            og.sim.step()

    # Always shut down environment at the end
    env.close()


if __name__ == "__main__":
    main()
Kinematics Demo
This demo is useful for...
Understanding how to dynamically sample kinematic states for BEHAVIOR dataset objects
Understanding how to import additional objects after the environment is created
This demo procedurally generates a small populated scene: it spawns a shelf and samples cracker boxes inside it, then places an open microwave on top of a cabinet and samples plates with apples both inside and on top of the microwave.
import logging
import os

import numpy as np

import omnigibson as og
from omnigibson import object_states
from omnigibson.macros import gm
from omnigibson.objects import DatasetObject

# Make sure object states and global contact reporting are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo to use the raycasting-based sampler to load objects onTop and/or inside another
    Loads a cabinet, a microwave open on top of it, and two plates with apples on top,
    one inside and one on top of the microwave
    Then loads a shelf and cracker boxes inside of it
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene
    cfg = {
        "scene": {"type": "Scene"},
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)
    env.step([])

    # Sample microwave and boxes
    sample_boxes_on_shelf(env)
    sample_microwave_plates_apples(env)

    max_steps = 100 if short_exec else -1
    step = 0
    while step != max_steps:
        env.step(np.array([]))
        step += 1

    # Always close environment at the end
    env.close()


def sample_microwave_plates_apples(env):
    # Load cabinet, set position manually, and step 100 times
    logging.info("Loading cabinet and microwave")

    microwave = DatasetObject(
        prim_path="/World/microwave",
        name="microwave",
        category="microwave",
        model="7128",
        scale=0.5,
    )
    og.sim.import_object(microwave)
    microwave.set_position(np.array([0, 0, 5.0]))
    env.step(np.array([]))  # One step is needed for the object to be fully initialized

    cabinet = DatasetObject(
        prim_path="/World/cabinet",
        name="cabinet",
        category="bottom_cabinet",
        model="46380",
    )
    og.sim.import_object(cabinet)
    z_offset = -cabinet.aabb_center[2] + cabinet.aabb_extent[2] / 2
    cabinet.set_position(np.array([1.0, 0, z_offset]))
    env.step(np.array([]))  # One step is needed for the object to be fully initialized

    # Set microwave on top of the cabinet, open it, and step 100 times
    logging.info("Placing microwave OnTop of the cabinet")
    assert microwave.states[object_states.OnTop].set_value(cabinet, True, use_ray_casting_method=True)
    assert microwave.states[object_states.Open].set_value(True)
    logging.info("Microwave loaded and placed")
    for _ in range(50):
        env.step(np.array([]))

    logging.info("Loading plates")
    n_plates = 2
    n_apples = 2
    for i in range(n_plates):
        plate = DatasetObject(
            prim_path=f"/World/plate{i}",
            name=f"plate{i}",
            category="plate",
            model="plate_000",
            bounding_box=np.array([0.25, 0.25, 0.05]),
        )
        og.sim.import_object(plate)
        env.step(np.array([]))  # One step is needed for the object to be fully initialized

        # Put the 1st plate in the microwave
        if i == 0:
            logging.info("Loading plate Inside the microwave")
            assert plate.states[object_states.Inside].set_value(microwave, True, use_ray_casting_method=True)
        else:
            logging.info("Loading plate OnTop the microwave")
            assert plate.states[object_states.OnTop].set_value(microwave, True, use_ray_casting_method=True)
        logging.info("Plate %d loaded and placed." % i)
        for _ in range(50):
            env.step(np.array([]))

        logging.info("Loading apples OnTop of the plate")
        for j in range(n_apples):
            apple = DatasetObject(
                prim_path=f"/World/apple{i * n_apples + j}",
                name=f"apple{i * n_apples + j}",
                category="apple",
                model="00_0",
            )
            og.sim.import_object(apple)
            env.step(np.array([]))  # One step is needed for the object to be fully initialized
            assert apple.states[object_states.OnTop].set_value(plate, True, use_ray_casting_method=True)
            logging.info("Apple %d loaded and placed." % j)
            for _ in range(50):
                env.step(np.array([]))


def sample_boxes_on_shelf(env):
    shelf = DatasetObject(
        prim_path=f"/World/shelf",
        name=f"shelf",
        category="shelf",
        model="1170df5b9512c1d92f6bce2b7e6c12b7",
        bounding_box=np.array([1.0, 0.4, 2.0]),
    )
    og.sim.import_object(shelf)
    z_offset = -shelf.aabb_center[2] + shelf.aabb_extent[2] / 2
    shelf.set_position(np.array([-1.0, 0, z_offset]))
    env.step(np.array([]))  # One step is needed for the object to be fully initialized
    logging.info("Shelf placed")
    for _ in range(50):
        env.step(np.array([]))

    for i in range(5):
        box = DatasetObject(
            prim_path=f"/World/box{i}",
            name=f"box{i}",
            category="cracker_box",
            model="cracker_box_000",
            bounding_box=np.array([0.2, 0.05, 0.3]),
        )
        og.sim.import_object(box)
        env.step(np.array([]))  # One step is needed for the object to be fully initialized
        box.states[object_states.Inside].set_value(shelf, True, use_ray_casting_method=True)
        logging.info(f"Box {i} placed.")
        for _ in range(50):
            env.step(np.array([]))


if __name__ == "__main__":
    main()
Slicing Demo
This demo is useful for...
Understanding how slicing works in OmniGibson
Understanding how to access individual objects once the environment is created
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.macros import gm
import omnigibson.utils.transform_utils as T

# Make sure object states, contact reporting, and transition rules are enabled
gm.ENABLE_OBJECT_STATES = True
gm.ENABLE_GLOBAL_CONTACT_REPORTING = True
gm.ENABLE_TRANSITION_RULES = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Demo of slicing an apple into two apple halves
    Loads a table, an apple, and a knife; the knife is dropped onto the apple, slicing it in half
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create the scene config to load -- empty scene with table, knife, and apple
    table_cfg = OrderedDict(
        type="DatasetObject", name="table", category="breakfast_table", model="19203", scale=0.9, position=[0, 0, 0.532],
    )

    apple_cfg = OrderedDict(
        type="DatasetObject", name="apple", category="apple", model="00_0", scale=1.5, position=[0.085, 0, 0.90],
    )

    knife_cfg = OrderedDict(
        type="DatasetObject", name="knife", category="table_knife", model="4", scale=2.5, position=[0, 0, 10.0],
    )

    light0_cfg = OrderedDict(
        type="LightObject", name="light0", light_type="Sphere", radius=0.01, intensity=4000.0, position=[1.217, -0.848, 1.388],
    )

    light1_cfg = OrderedDict(
        type="LightObject", name="light1", light_type="Sphere", radius=0.01, intensity=4000.0, position=[-1.217, 0.848, 1.388],
    )

    cfg = {
        "scene": {"type": "Scene"},
        "objects": [table_cfg, apple_cfg, knife_cfg, light0_cfg, light1_cfg],
    }

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Grab reference to apple and knife
    apple = env.scene.object_registry("name", "apple")
    knife = env.scene.object_registry("name", "knife")

    # Update the simulator's viewer camera's pose so it points towards the table
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([0.544888, -0.412084, 1.11569]),
        orientation=np.array([0.54757518, 0.27792802, 0.35721896, 0.70378409]),
    )

    # Let apple settle
    for _ in range(50):
        env.step(np.array([]))

    knife.keep_still()
    knife.set_position_orientation(
        position=apple.get_position() + np.array([-0.15, 0.0, 0.2]),
        orientation=T.euler2quat([-np.pi / 2, 0, 0]),
    )

    input("The knife will fall on the apple and slice it. Press [ENTER] to continue.")

    # Step simulation for a bit so that apple is sliced
    for i in range(1000):
        env.step(np.array([]))

    input("Apple has been sliced! Press [ENTER] to terminate the demo.")

    # Always close environment at the end
    env.close()


if __name__ == "__main__":
    main()
🤖 Robots
These examples showcase how to interact and leverage robot objects in OmniGibson.
Robot Visualizer Demo
This demo is useful for...
Understanding how to load a robot into OmniGibson after an environment is created
import logging

import numpy as np

import omnigibson as og
from omnigibson.robots import REGISTERED_ROBOTS
from omnigibson.scenes import Scene


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot demo
    Loads all robots in an empty scene, generate random actions
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Create empty scene with no robots in it initially
    cfg = {
        "scene": {"type": "Scene"},
    }

    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Iterate over all robots and demo their motion
    for robot_name, robot_cls in REGISTERED_ROBOTS.items():
        # Create and import robot
        robot = robot_cls(
            prim_path=f"/World/{robot_name}",
            name=robot_name,
            obs_modalities=[],  # We're just moving robots around so don't load any observation modalities
        )
        og.sim.import_object(robot)

        # At least one step is always needed while sim is playing for any imported object to be fully initialized
        og.sim.play()
        og.sim.step()

        # Reset robot and make sure it's not moving
        robot.reset()
        robot.keep_still()

        # Log information
        logging.info(f"Loaded {robot_name}")
        logging.info(f"Moving {robot_name}")

        if not headless:
            # Set viewer in front facing robot
            og.sim.viewer_camera.set_position_orientation(
                position=np.array([2.69918369, -3.63686664, 4.57894564]),
                orientation=np.array([0.39592411, 0.1348514, 0.29286304, 0.85982]),
            )
            og.sim.enable_viewer_camera_teleoperation()

        # Hold still briefly so viewer can see robot
        for _ in range(100):
            og.sim.step()

        # Then apply random actions for a bit
        for _ in range(30):
            action = np.random.uniform(-1, 1, robot.action_dim)
            for _ in range(10):
                env.step(action)

        # Re-import the scene
        og.sim.stop()
        og.sim.import_scene(Scene())

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
Robot Control Demo
This demo is useful for...
Understanding how different controllers can be used to control robots
Understanding how to teleoperate a robot through external commands
"""
Example script demo'ing robot control.

Options for random actions, as well as selection of robot action space
"""
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.robots import REGISTERED_ROBOTS
from omnigibson.utils.ui_utils import choose_from_options, KeyboardRobotController

CONTROL_MODES = OrderedDict(
    random="Use autonomous random actions (default)",
    teleop="Use keyboard control",
)

SCENES = OrderedDict(
    Rs_int="Realistic interactive home environment (default)",
    empty="Empty environment with no objects",
)

# Don't use GPU dynamics and use flatcache for a performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def choose_controllers(robot, random_selection=False):
    """
    For a given robot, iterates over all components of the robot, and returns the requested controller type for each
    component.

    :param robot: BaseRobot, robot class from which to infer relevant valid controller options
    :param random_selection: bool, if the selection is random (for automatic demo execution). Default False

    :return OrderedDict: Mapping from individual robot component (e.g.: base, arm, etc.) to selected controller names
    """
    # Create new dict to store responses from user
    controller_choices = OrderedDict()

    # Grab the default controller config so we have the registry of all possible controller options
    default_config = robot._default_controller_config

    # Iterate over all components in robot
    for component, controller_options in default_config.items():
        # Select controller
        options = list(sorted(controller_options.keys()))
        choice = choose_from_options(
            options=options, name="{} controller".format(component), random_selection=random_selection
        )

        # Add to user responses
        controller_choices[component] = choice

    return controller_choices


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot control demo with selection
    Queries the user to select a robot, the controllers, a scene and a type of input (random actions or teleop)
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose scene to load
    scene_model = choose_from_options(options=SCENES, name="scene", random_selection=random_selection)

    # Choose robot to create
    robot_name = choose_from_options(
        options=list(sorted(REGISTERED_ROBOTS.keys())), name="robot", random_selection=random_selection
    )

    # Create the config for generating the environment we want
    scene_cfg = OrderedDict()
    if scene_model == "empty":
        scene_cfg["type"] = "Scene"
    else:
        scene_cfg["type"] = "InteractiveTraversableScene"
        scene_cfg["scene_model"] = scene_model

    # Add the robot we want to load
    robot0_cfg = OrderedDict()
    robot0_cfg["type"] = robot_name
    robot0_cfg["obs_modalities"] = ["rgb", "depth", "seg_instance", "normal", "scan", "occupancy_grid"]
    robot0_cfg["action_type"] = "continuous"
    robot0_cfg["action_normalize"] = True

    # Compile config
    cfg = OrderedDict(scene=scene_cfg, robots=[robot0_cfg])

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Choose robot controller to use
    robot = env.robots[0]
    controller_choices = choose_controllers(robot=robot, random_selection=random_selection)

    # Choose control mode
    if random_selection:
        control_mode = "random"
    else:
        control_mode = choose_from_options(options=CONTROL_MODES, name="control mode")

    # Update the control mode of the robot
    controller_config = {component: {"name": name} for component, name in controller_choices.items()}
    robot.reload_controllers(controller_config=controller_config)

    # Update the simulator's viewer camera's pose so it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([1.46949, -3.97358, 2.21529]),
        orientation=np.array([0.56829048, 0.09569975, 0.13571846, 0.80589577]),
    )

    # Reset environment
    env.reset()

    # Create teleop controller
    action_generator = KeyboardRobotController(robot=robot)

    # Print out relevant keyboard info if using keyboard teleop
    if control_mode == "teleop":
        action_generator.print_keyboard_teleop_info()

    # Other helpful user info
    print("Running demo.")
    print("Press ESC to quit")

    # Loop control until user quits
    max_steps = -1 if not short_exec else 100
    step = 0
    while step != max_steps:
        action = action_generator.get_random_action() if control_mode == "random" else action_generator.get_teleop_action()
        for _ in range(10):
            env.step(action=action)
        step += 1

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
Robot Grasping Demo
This demo is useful for...
Understanding the difference between physical and sticky (1) grasping
Understanding how to teleoperate a robot through external commands
physical means that natural friction is required to hold objects; sticky means that objects are constrained to the robot's gripper once contact is made
This demo lets you choose a grasping mode and then loads a Fetch robot, a table, a chair, and a cube on the table. You can then teleoperate the robot to grasp the cube, observing the difference in grasping behavior based on the grasping mode chosen.
"""
Example script demo'ing robot manipulation control with grasping.
"""
import logging
from collections import OrderedDict

import numpy as np

import omnigibson as og
from omnigibson.macros import gm
from omnigibson.utils.ui_utils import choose_from_options, KeyboardRobotController

GRASPING_MODES = OrderedDict(
    sticky="Sticky Mitten - Objects are magnetized when they touch the fingers and a CLOSE command is given",
    physical="Physical Grasping - No additional grasping assistance applied",
)

# Don't use GPU dynamics and use flatcache for a performance boost
gm.USE_GPU_DYNAMICS = False
gm.ENABLE_FLATCACHE = True


def main(random_selection=False, headless=False, short_exec=False):
    """
    Robot grasping mode demo with selection
    Queries the user to select a type of grasping mode and GUI
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    # Choose type of grasping
    grasping_mode = choose_from_options(options=GRASPING_MODES, name="grasping mode", random_selection=random_selection)

    # Create environment configuration to use
    scene_cfg = OrderedDict(type="Scene")
    robot0_cfg = OrderedDict(
        type="Fetch",
        obs_modalities=["rgb"],  # we're just doing a grasping demo so we don't need all observation modalities
        action_type="continuous",
        action_normalize=True,
        grasping_mode=grasping_mode,
    )

    # Define objects to load
    table_cfg = OrderedDict(
        type="DatasetObject",
        name="table",
        category="breakfast_table",
        model="1b4e6f9dd22a8c628ef9d976af675b86",
        bounding_box=[0.5, 0.5, 0.8],
        fit_avg_dim_volume=False,
        fixed_base=True,
        position=[0.7, -0.1, 0.6],
        orientation=[0, 0, 0.707, 0.707],
    )

    chair_cfg = OrderedDict(
        type="DatasetObject",
        name="chair",
        category="straight_chair",
        model="2a8d87523e23a01d5f40874aec1ee3a6",
        bounding_box=None,
        fit_avg_dim_volume=True,
        fixed_base=False,
        position=[0.45, 0.65, 0.425],
        orientation=[0, 0, -0.9990215, -0.0442276],
    )

    box_cfg = OrderedDict(
        type="PrimitiveObject",
        name="box",
        primitive_type="Cube",
        rgba=[1.0, 0, 0, 1.0],
        size=0.05,
        position=[0.53, -0.1, 0.97],
    )

    # Compile config
    cfg = OrderedDict(scene=scene_cfg, robots=[robot0_cfg], objects=[table_cfg, chair_cfg, box_cfg])

    # Create the environment
    env = og.Environment(configs=cfg, action_timestep=1 / 60., physics_timestep=1 / 60.)

    # Reset the robot
    robot = env.robots[0]
    robot.set_position([0, 0, 0])
    robot.reset()
    robot.keep_still()

    # Update the simulator's viewer camera's pose so it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=np.array([-2.39951, 2.26469, 2.66227]),
        orientation=np.array([-0.23898481, 0.48475231, 0.75464013, -0.37204802]),
    )

    # Create teleop controller
    action_generator = KeyboardRobotController(robot=robot)

    # Print out relevant keyboard info if using keyboard teleop
    action_generator.print_keyboard_teleop_info()

    # Other helpful user info
    print("Running demo with grasping mode {}.".format(grasping_mode))
    print("Press ESC to quit")

    # Loop control until user quits
    max_steps = -1 if not short_exec else 100
    step = 0
    while step != max_steps:
        action = action_generator.get_random_action() if random_selection else action_generator.get_teleop_action()
        for _ in range(10):
            env.step(action)
        step += 1

    # Always shut down the environment cleanly at the end
    env.close()


if __name__ == "__main__":
    main()
🧰 Simulator
These examples showcase useful functionality from OmniGibson's monolithic Simulator object.
What's the difference between Environment and Simulator?
The Simulator class (og.sim):
handles importing scenes and objects into the actual simulation
directly interfaces with the underlying physics engine
The Environment class thinly wraps the Simulator's core functionality by:
providing convenience functions for automatically importing a predefined scene, object(s), and robot(s) (via the cfg argument), as well as a task
providing an OpenAI Gym interface for stepping through the simulation
While most of the core functionality in Environment (as well as more fine-grained physics control) can be replicated via direct calls to the Simulator (og.sim), doing so requires a deeper understanding of OmniGibson's infrastructure and is not recommended for new users.
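To make the distinction concrete, here is a minimal sketch (not one of the shipped examples) that steps an empty scene through both interfaces. It only uses calls that already appear in the demos above; the empty action array assumes no robot has been loaded.

import numpy as np
import omnigibson as og

# High-level interface: the Environment imports the scene from a config and exposes a Gym-style API
env = og.Environment(configs={"scene": {"type": "Scene"}})
env.reset()
state, reward, done, info = env.step(np.array([]))  # empty action since no robot is loaded

# Low-level interface: og.sim is the Simulator behind that same Environment
og.sim.step()   # advance physics/rendering directly, bypassing task and observation logic
og.sim.stop()   # stop and re-play the simulation -- finer-grained control than Environment exposes
og.sim.play()

env.close()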
State Saving and Loading Demo
This demo is useful for...
Understanding how to interact with objects using the mouse
Understanding how to save the active simulator state to a file
Understanding how to restore the simulator state from a given file
This demo loads a stripped-down scene with the Turtlebot robot, and lets you interact with objects to modify the scene. The state is then saved, written to a .json file, and then restored in the simulation.
import os
import logging

import numpy as np

import omnigibson as og
from omnigibson.utils.ui_utils import KeyboardEventHandler

import carb

TEST_OUT_PATH = ""  # Define output directory here.


def main(random_selection=False, headless=False, short_exec=False):
    """
    Prompts the user to select whether they are saving or loading an environment, and interactively
    shows how an environment can be saved or restored.
    """
    logging.info("*" * 80 + "\nDescription:" + main.__doc__ + "*" * 80)

    cfg = {
        "scene": {
            "type": "InteractiveTraversableScene",
            "scene_model": "Rs_int",
            "load_object_categories": ["floors", "walls", "bed", "bottom_cabinet", "chair"],
        },
        "robots": [
            {
                "type": "Turtlebot",
                "obs_modalities": ["rgb", "depth"],
            },
        ],
    }

    # Create the environment
    env = og.Environment(configs=cfg)

    # Set the camera to a good angle
    def set_camera_pose():
        og.sim.viewer_camera.set_position_orientation(
            position=np.array([-0.229375, -3.40576, 7.26143]),
            orientation=np.array([0.27619733, -0.00230233, -0.00801152, 0.9610648]),
        )

    set_camera_pose()

    # Give user instructions, and then loop until completed
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to manipulate environment until ready, then press Z to exit
        print()
        print("Modify the scene by SHIFT + left clicking objects and dragging them. Once finished, press Z.")

        # Register callback so user knows to press Z once they're done manipulating the scene
        def complete_loop():
            nonlocal completed
            completed = True

        KeyboardEventHandler.add_keyboard_callback(carb.input.KeyboardInput.Z, complete_loop)

    while not completed:
        env.step(np.random.uniform(-1, 1, env.robots[0].action_dim))

    print("Completed scene modification, saving scene...")
    save_path = os.path.join(TEST_OUT_PATH, "saved_stage.json")
    og.sim.save(json_path=save_path)

    print("Re-loading scene...")
    og.sim.restore(json_path=save_path)

    # Take a sim step and play
    og.sim.step()
    og.sim.play()
    set_camera_pose()

    # Loop until user terminates
    completed = short_exec
    if not short_exec and not random_selection:
        # Notify user to view the reloaded scene until ready, then press Z to exit
        print()
        print("View reloaded scene. Once finished, press Z.")

        # Register callback so user knows to press Z once they're done viewing the scene
        KeyboardEventHandler.add_keyboard_callback(carb.input.KeyboardInput.Z, complete_loop)

    while not completed:
        env.step(np.zeros(env.robots[0].action_dim))

    # Shutdown omnigibson at the end
    og.shutdown()


if __name__ == "__main__":
    main()