class ObjectNaviThorGridTask(Task[IThorEnvironment])


Defines the object navigation task in AI2-THOR.

In object navigation an agent is randomly initialized into an AI2-THOR scene and must find an object of a given type (e.g. a tomato or a television). An object is considered found if the agent takes an End action while the object is visible to the agent (see the AI2-THOR documentation for the definition of visibility).

The actions available to an agent in this task are:

  1. Move ahead
    • Moves agent ahead by 0.25 meters.
  2. Rotate left / rotate right
    • Rotates the agent by 90 degrees counter-clockwise / clockwise.
  3. Look down / look up
    • Changes the agent's view angle by 30 degrees up or down. The agent cannot look more than 30 degrees above the horizontal or more than 60 degrees below it.
  4. End
    • Ends the task and the agent receives a positive reward if the object type is visible to the agent, otherwise it receives a negative reward.
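The discrete action space above can be sketched as follows. The constants and the horizon-clamping helper are illustrative assumptions; the actual AllenAct action strings and internals may differ.

```python
# Illustrative action names and increments for the task described above.
# These are assumptions for the sketch, not the library's actual constants.
MOVE_AHEAD, ROTATE_LEFT, ROTATE_RIGHT = "MoveAhead", "RotateLeft", "RotateRight"
LOOK_UP, LOOK_DOWN, END = "LookUp", "LookDown", "End"

GRID_SIZE_METERS = 0.25   # MoveAhead step size
ROTATE_DEGREES = 90       # RotateLeft / RotateRight increment
HORIZON_DEGREES = 30      # LookUp / LookDown increment


def clamp_horizon(horizon: int, delta: int) -> int:
    """Apply a LookUp/LookDown delta, keeping the view angle in [-30, 60].

    Following AI2-THOR's convention, negative horizons look above the
    horizontal and positive horizons below it, matching the
    30-degrees-up / 60-degrees-down limit described above.
    """
    return max(-HORIZON_DEGREES, min(2 * HORIZON_DEGREES, horizon + delta))
```

For example, an agent already looking 30 degrees up stays at -30 after another LookUp, and one looking 60 degrees down stays at 60 after another LookDown.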


Attributes
  • env: The ai2thor environment.
  • sensor_suite: Collection of sensors formed from the sensors argument in the initializer.
  • task_info: The task info. Must contain a field "object_type" that specifies, as a string, the goal object type.
  • max_steps: The maximum number of steps an agent can take in the task before it is considered failed.
  • observation_space: The observation space returned on each step from the sensors.
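A minimal sketch of a `task_info` dictionary satisfying the documented requirement. Only the `"object_type"` key is documented as required; the other key shown is an illustrative assumption.

```python
# Sketch of a task_info dict for this task. "object_type" is the only
# documented required field; "scene" is an assumed, illustrative extra.
task_info = {
    "object_type": "Tomato",  # required: goal object type, as a string
    "scene": "FloorPlan1",    # assumption: identifier of the AI2-THOR scene
}
```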


 | __init__(env: IThorEnvironment, sensors: List[Sensor], task_info: Dict[str, Any], max_steps: int, **kwargs) -> None



See class documentation for parameter definitions.


 | is_goal_object_visible() -> bool


Is the goal object currently visible?
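The check amounts to asking whether any currently visible object has the goal type. A hedged sketch, assuming visibility metadata shaped like AI2-THOR's per-object dictionaries (the real method queries the environment rather than taking a list):

```python
from typing import Any, Dict, List


def is_goal_object_visible(
    visible_objects: List[Dict[str, Any]], goal_type: str
) -> bool:
    """Return True iff some visible object's type matches the goal.

    Illustrative signature: the actual method takes no arguments and
    reads the goal type from task_info and visibility from the
    environment's metadata.
    """
    return any(obj.get("objectType") == goal_type for obj in visible_objects)
```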


 | judge() -> float


Compute the reward after having taken a step.
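Combining the End-action semantics above, the per-step reward can be sketched as a small step penalty plus a success/failure bonus when End is taken. The specific values below are assumptions for illustration; the real `judge()` draws them from the task's reward configuration.

```python
def judge(
    took_end_action: bool,
    goal_visible: bool,
    step_penalty: float = -0.01,   # assumption: small per-step cost
    success_reward: float = 1.0,   # assumption: bonus when End + visible
    failure_reward: float = -1.0,  # assumption: penalty when End + not visible
) -> float:
    """Hedged sketch of the reward computed after a step.

    Every step incurs step_penalty; taking End adds success_reward if
    the goal object is visible, otherwise failure_reward.
    """
    reward = step_penalty
    if took_end_action:
        reward += success_reward if goal_visible else failure_reward
    return reward
```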