What is BEHAVIOR?
BEHAVIOR is a simulation benchmark to evaluate Embodied AI solutions.
Embodied artificial intelligence (EAI) is advancing. But where are we now? We propose to test EAI agents with the physical challenges humans solve in their everyday life: household activities such as picking up toys, setting the table, or cleaning floors. BEHAVIOR is a benchmark in simulation where EAI agents need to plan and execute navigation and manipulation strategies based on sensor information to fulfill 100 household activities.
BEHAVIOR tests the ability of agents to perceive the environment, plan, and execute complex long-horizon activities that involve multiple objects, rooms, and state changes, all with the reproducibility, safety, and observability offered by a realistic physics simulation. To compare the performance of EAI agents to that of humans, we have collected human demonstrations of the same activities in the same environments using virtual reality. The demonstrations serve as a reference for comparing EAI solutions, but they can also be used to develop them.
What makes BEHAVIOR different?
100 Household Activities in Realistically Simulated Homes
including cleaning, preparing food, tidying, polishing, installing elements, etc. The activities were obtained from the American Time Use Survey and approximate the real distribution of tasks humans perform in their everyday lives.
Decision Making based on Onboard Sensing for Navigation and Manipulation
the long-horizon activities require the agent to understand the scene, plan a strategy, and execute it by controlling the motion of the embodied agent, all based on virtual sensor signals generated by onboard sensors such as RGB-D cameras and position encoders; as close as it gets to the challenges of the real world. A minimal sketch of this interface follows.
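The sketch below illustrates the shape of such a decision-making interface: a policy that consumes onboard sensor readings and outputs motion commands. The observation keys, action dimensionality, and class name are assumptions for illustration, not the exact BEHAVIOR/iGibson API.

```python
# Illustrative sketch only: observation keys and action size below are assumptions.
import numpy as np

class RandomPolicy:
    """Baseline that ignores observations and samples random motion commands."""

    def __init__(self, action_dim: int = 11):  # placeholder action dimensionality
        self.action_dim = action_dim

    def act(self, observation: dict) -> np.ndarray:
        rgb = observation.get("rgb")                  # H x W x 3 camera image (assumed key)
        depth = observation.get("depth")              # H x W depth map (assumed key)
        proprio = observation.get("proprioception")   # joint/base state (assumed key)
        # A learned policy would map (rgb, depth, proprio) to an action here.
        return np.random.uniform(-1.0, 1.0, size=self.action_dim)
```

A learned agent would replace the random sampling with a model that maps the sensor readings to navigation and manipulation commands.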
More Complex Interactions than just Pick-and-Place
accomplishing the BEHAVIOR activities requires changing more than the position of objects in the environment: objects need to be cooked, frozen, soaked, cleaned, and more. All these new types of state changes are supported by the provided simulator, iGibson 2.0, and enable entirely new types of activities.
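As a rough sketch of how these extended states can be queried, the snippet below follows the object-state interface described in the iGibson 2.0 documentation; the exact import paths, state names, and the `obj` handle should be treated as assumptions that may differ across versions.

```python
# Sketch of querying iGibson 2.0's extended object states (cooked, frozen, soaked, ...).
# Import path and state names follow the iGibson 2.0 docs; treat them as assumptions.
from igibson import object_states

def report_extended_states(obj):
    """Print a few of the non-kinematic states BEHAVIOR activities depend on."""
    for state_cls in (object_states.Cooked, object_states.Frozen, object_states.Soaked):
        if state_cls in obj.states:
            print(state_cls.__name__, obj.states[state_cls].get_value())

# Setting a state, e.g. so an activity predicate such as cooked(apple) holds:
# apple.states[object_states.Cooked].set_value(True)
```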
Getting started
Do you want to benchmark your solution? Follow the instructions here. You will download and install the required infrastructure: a new version of iGibson (our simulation environment for interactive tasks, now extended with new object states for BEHAVIOR), the BEHAVIOR Dataset of Objects and the iGibson 2.0 Dataset of Scenes (combined in our benchmarking bundle) with the object and house models needed to use the benchmark, and our starter code with examples to train against on the tasks.
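Once the infrastructure is installed, evaluation typically runs as a gym-style loop. The sketch below assumes the BehaviorEnv class shipped with iGibson 2.0 and the starter code; the module path, config file name, and constructor arguments are assumptions for illustration, and the activity (one of the 100) is usually selected in the config. Consult the benchmark documentation for the exact API.

```python
# Minimal evaluation-loop sketch; names and arguments are assumptions, not the exact API.
from igibson.envs.behavior_env import BehaviorEnv

env = BehaviorEnv(config_file="behavior_onboard_sensing.yaml", mode="headless")

obs = env.reset()        # observations come from simulated onboard sensors
done = False
while not done:
    action = env.action_space.sample()   # replace with your agent's policy
    obs, reward, done, info = env.step(action)
env.close()
```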

References
- BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments. Sanjana Srivastava*, Chengshu Li*, Michael Lingelbach*, Roberto Martín-Martín*, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei. Conference on Robot Learning (CoRL) 2021.
- iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks. Chengshu Li*, Fei Xia*, Roberto Martín-Martín*, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese. Conference on Robot Learning (CoRL) 2021.
- iGibson 1.0: A Simulation Environment for Interactive Tasks in Large Realistic Scenes. Bokui Shen*, Fei Xia*, Chengshu Li*, Roberto Martín-Martín*, Linxi Fan, Guanzhi Wang, Shyamal Buch, Claudia D'Arpino, Sanjana Srivastava, Lyne P. Tchapmi, Micael E. Tchapmi, Kent Vainio, Li Fei-Fei, Silvio Savarese. International Conference on Intelligent Robots and Systems (IROS) 2021.