BEHAVIOR is a simulation benchmark to evaluate Embodied AI solutions.

Embodied artificial intelligence (EAI) is advancing. But where are we now? We propose to test EAI agents with the physical challenges humans solve in their everyday life: household activities such as picking up toys, setting the table, or cleaning floors. BEHAVIOR is a benchmark in simulation where EAI agents need to plan and execute navigation and manipulation strategies based on sensor information to fulfill 100 household activities.

BEHAVIOR tests the ability of agents to perceive the environment, plan, and execute complex long-horizon activities that involve multiple objects, rooms, and state changes, all with the reproducibility, safety, and observability offered by a realistic physics simulation. To compare the performance of EAI agents to that of humans, we have collected human demonstrations of the same tasks in the same environments using virtual reality. The demonstrations serve as a reference for comparing EAI solutions, but they can also be used to develop them.

What makes BEHAVIOR different?

100 Household Activities in Realistically Simulated Homes

including cleaning, preparing food, tidying, polishing, installing elements, etc. The activities were obtained from the American Time Use Survey and approximate the real distribution of tasks performed by humans in their everyday lives.

Activity list | Activity images and videos

Decision Making based on Onboard Sensing for Navigation and Manipulation

the long-horizon activities require the agent to understand the scene, plan a strategy, and execute it by controlling its own motion, all based on the virtual signals generated by onboard sensors such as RGB-D cameras and position encoders; as close as it gets to the challenges of the real world.

Benchmark documentation

More Complex Interactions than just Pick-and-Place

accomplishing the BEHAVIOR activities requires changing more than the position of the objects in the environment: they need to be cooked, frozen, soaked, cleaned, ... All these new types of state changes are supported by the provided simulator, iGibson 2.0, and enable completely new types of activities.

More about the simulator iGibson 2.0

Getting started

Do you want to benchmark your solution? Follow the instructions here. You will download and install the required infrastructure: a new version of iGibson, our simulation environment for interactive tasks, now extended with new object states for BEHAVIOR; the BEHAVIOR Dataset of Objects and the iGibson 2.0 Dataset of Scenes (combined in our benchmarking bundle), with the object and house models needed to use the benchmark; and our starter code, with examples to train agents on the tasks.
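Once the infrastructure is installed, interaction follows the standard Gym-style loop exposed by iGibson environments. Below is a minimal sketch of that loop; the config file name is an illustrative placeholder (not a file shipped with the benchmark), and the random policy is a stand-in for your own agent.

```python
# Minimal sketch of a Gym-style interaction loop with an iGibson environment.
# NOTE: "behavior_config.yaml" is a hypothetical placeholder config path;
# substitute the config provided with the starter code.
import numpy as np


def random_action(dim: int) -> np.ndarray:
    """Placeholder policy: a uniform random action in [-1, 1] per dimension.
    Replace this with your agent's policy, which should map onboard sensor
    observations (e.g. RGB-D images, proprioception) to actions."""
    return np.random.uniform(-1.0, 1.0, size=dim)


if __name__ == "__main__":
    # Requires the iGibson package and its assets to be installed.
    from igibson.envs.igibson_env import iGibsonEnv

    env = iGibsonEnv(config_file="behavior_config.yaml", mode="headless")
    obs = env.reset()  # obs contains the onboard sensor readings
    for _ in range(100):
        action = random_action(env.action_space.shape[0])
        obs, reward, done, info = env.step(action)
        if done:
            break
    env.close()
```

The random policy is only there to make the loop self-contained; in practice the starter code provides training examples to plug in here.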

Join our community!

Sign up here for latest updates on the challenge and benchmark.

Mailing List


Chengshu (Eric) Li
Sanjana Srivastava
Michael Lingelbach
Fei Xia
Cem Gokmen
Shyamal Buch
Ruohan Zhang
Josiah Wong
Roberto Martín-Martín
Karen Liu
Silvio Savarese
Hyowon Gweon
Jiajun Wu
Li Fei-Fei