BEHAVIOR is a simulation benchmark to evaluate Embodied AI solutions.

Embodied artificial intelligence (EAI) is advancing. But where are we now? We propose to test EAI agents with the physical challenges humans solve in their everyday lives: household activities such as picking up toys, setting the table, or cleaning floors. BEHAVIOR is a challenge in simulation where EAI agents must plan and execute navigation and manipulation strategies based on sensor information to fulfill 100 household activities.

BEHAVIOR tests the ability of agents to perceive the environment, plan, and execute complex long-horizon activities that involve multiple objects, rooms, and state changes, all with the reproducibility, safety, and observability offered by a realistic physics simulation. To compare the performance of EAI agents to that of humans, we have collected human demonstrations of the same tasks in the same environments using virtual reality. The demonstrations serve as a reference for comparing EAI solutions, but they can also be used to develop them.

What makes BEHAVIOR different?

100 Household Activities in Realistically Simulated Homes

including cleaning, preparing food, tidying, polishing, installing elements, etc. The activities were obtained from the American Time Use Survey and approximate the real distribution of tasks performed by humans in their everyday lives.

Activity list | Activity images and videos

Decision Making based on Onboard Sensing for Navigation and Manipulation

the long-horizon activities require agents to understand the scene, plan a strategy, and execute it by controlling the motion of the embodied agent, all based on the virtual sensor signals generated by onboard sensors such as RGB-D cameras and position encoders; as close as it gets to the challenges of the real world.
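The sense-plan-act loop described above can be sketched with a toy stand-in environment. All names here (`ToyHouseholdEnv`, the observation keys, the action strings) are illustrative assumptions, not the actual iGibson/BEHAVIOR API; the point is only the structure of a policy that acts purely from onboard sensor readings.

```python
# Hypothetical stand-in for a BEHAVIOR-style environment. The agent must
# close the distance to a target object using only its sensor readings.
class ToyHouseholdEnv:
    def __init__(self, goal=5):
        self.goal = goal
        self.agent_pos = 0

    def reset(self):
        self.agent_pos = 0
        return self._observe()

    def _observe(self):
        # Stand-ins for onboard sensor signals (e.g. depth camera,
        # joint position encoders); the policy sees only this dict.
        return {"depth_to_goal": self.goal - self.agent_pos,
                "joint_pos": self.agent_pos}

    def step(self, action):
        if action == "forward":
            self.agent_pos += 1
        done = self.agent_pos == self.goal
        return self._observe(), done


def policy(obs):
    # Decide from sensing alone: approach until the target reads as reached.
    return "forward" if obs["depth_to_goal"] > 0 else "grasp"


env = ToyHouseholdEnv()
obs = env.reset()
done = False
steps = 0
while not done and steps < 100:
    obs, done = env.step(policy(obs))
    steps += 1
print(steps)  # -> 5
```

In the real benchmark the observation is far richer (RGB-D images, proprioception) and the action space is continuous motor control, but the interface contract, observation in, action out, with no privileged state access, is the same.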

Benchmark documentation

More Complex Interactions than just Pick-and-Place

accomplishing the BEHAVIOR activities requires changing more than the positions of objects in the environment: objects need to be cooked, frozen, soaked, cleaned, and more. All these new types of state changes are supported by the provided simulator, iGibson 2.0, and enable completely new types of activities.
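To make the idea concrete, here is a minimal sketch of an extended object state, assuming a simple temperature-threshold model of "cooked". The class, attribute names, and threshold are hypothetical illustrations of the concept, not iGibson 2.0's actual implementation.

```python
# Hedged sketch: a tiny object-state model inspired by the non-kinematic
# states BEHAVIOR tracks (cooked, frozen, soaked, ...). Illustrative only.
class SimObject:
    COOK_TEMP = 70.0  # hypothetical cooking threshold, degrees Celsius

    def __init__(self, name):
        self.name = name
        self.temperature = 20.0
        self.cooked = False
        self.soaked = False

    def heat(self, degrees):
        self.temperature += degrees
        # Cooking is modeled as sticky: once the threshold is crossed,
        # the object stays cooked even if it later cools down.
        if self.temperature >= self.COOK_TEMP:
            self.cooked = True

    def soak(self):
        self.soaked = True


# A goal condition can then test functional states, not just object poses:
patty = SimObject("patty")
patty.heat(60.0)              # 20 C + 60 C crosses the threshold
goal_satisfied = patty.cooked
print(goal_satisfied)  # -> True
```

The benefit of this kind of representation is that activity goals can be expressed as logical conditions over object states ("the patty is cooked", "the towel is soaked"), which is what lets the benchmark go beyond pick-and-place.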

More about the simulator iGibson 2.0