allenact_plugins.ithor_plugin.ithor_tasks
ObjectNaviThorGridTask
class ObjectNaviThorGridTask(Task[IThorEnvironment])
Defines the object navigation task in AI2-THOR.
In object navigation an agent is randomly initialized into an AI2-THOR scene and must
find an object of a given type (e.g. tomato, television, etc.). An object is considered
found if the agent takes an End action and the object is visible to the agent (see the
AI2-THOR documentation for a definition of visibility).
The actions available to an agent in this task are (a short usage sketch follows the list):
- Move ahead
  - Moves the agent ahead by 0.25 meters.
- Rotate left / rotate right
  - Rotates the agent by 90 degrees counter-clockwise / clockwise.
- Look down / look up
  - Changes the agent's view angle by 30 degrees up or down. An agent cannot look more than 30 degrees above horizontal or less than 60 degrees below horizontal.
- End
  - Ends the task; the agent receives a positive reward if the goal object type is visible to the agent, otherwise it receives a negative reward.
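As a rough sketch of how these actions are driven, the snippet below rotates in place and issues End once the goal object becomes visible. It assumes a `task` instance has already been constructed (see the initializer sketch below); the specific action strings ("RotateLeft", "End") and the `class_action_names()` / `step()` helpers follow the generic AllenAct Task interface and should be treated as assumptions here.

```python
# Illustrative only: assumes `task` is an already-constructed ObjectNaviThorGridTask
# (see the __init__ sketch below). Action strings and helper names are assumptions.
from allenact_plugins.ithor_plugin.ithor_tasks import ObjectNaviThorGridTask

action_names = ObjectNaviThorGridTask.class_action_names()  # tuple of action name strings

while not task.is_done():
    if task.is_goal_object_visible():
        action = action_names.index("End")         # declare the object found (name assumed)
    else:
        action = action_names.index("RotateLeft")  # scan the room (name assumed)
    step_result = task.step(action)                # one environment step
    print("reward:", step_result.reward, "done:", step_result.done)
```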
Attributes
env
: The ai2thor environment.
sensor_suite
: Collection of sensors formed from the sensors argument in the initializer.
task_info
: The task info. Must contain a field "object_type" that specifies, as a string, the goal object type.
max_steps
: The maximum number of steps an agent can take in the task before it is considered failed.
observation_space
: The observation space returned on each step from the sensors.
ObjectNaviThorGridTask.__init__
| __init__(env: IThorEnvironment, sensors: List[Sensor], task_info: Dict[str, Any], max_steps: int, **kwargs) -> None
Initializer.
See class documentation for parameter definitions.
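A minimal construction sketch, assuming the environment and sensor classes below from the same plugin; the sensor arguments, the scene name, and the env.reset(...) usage are illustrative assumptions rather than part of this class's documented interface.

```python
from allenact_plugins.ithor_plugin.ithor_environment import IThorEnvironment
from allenact_plugins.ithor_plugin.ithor_sensors import (
    GoalObjectTypeThorSensor,
    RGBSensorThor,
)
from allenact_plugins.ithor_plugin.ithor_tasks import ObjectNaviThorGridTask

env = IThorEnvironment()              # starts an AI2-THOR controller (default settings assumed)
env.reset(scene_name="FloorPlan1")    # scene name chosen purely for illustration

sensors = [
    RGBSensorThor(height=224, width=224),                             # arguments assumed
    GoalObjectTypeThorSensor(object_types=["Tomato", "Television"]),  # arguments assumed
]

task = ObjectNaviThorGridTask(
    env=env,
    sensors=sensors,
    task_info={"object_type": "Tomato"},  # "object_type" is required (see Attributes above)
    max_steps=500,
)
observations = task.get_observations()    # one entry per sensor in the sensor suite
```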
ObjectNaviThorGridTask.is_goal_object_visible
| is_goal_object_visible() -> bool
Is the goal object currently visible?
ObjectNaviThorGridTask.judge
| judge() -> float
Compute the reward after having taken a step.
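The documentation above does not spell out the reward structure. As an illustration only (not necessarily what judge implements), object-navigation rewards are commonly a small per-step penalty plus a terminal success/failure bonus at the End action:

```python
# Hypothetical shaping, shown for intuition; judge() in this class may differ.
def example_judge(took_end_action: bool, goal_visible: bool) -> float:
    reward = -0.01                                # small step penalty encourages short episodes
    if took_end_action:
        reward += 1.0 if goal_visible else -1.0   # success vs. failure at the End action
    return reward
```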