core.base_abstractions.task#

[view_source]

Defines the primary data structures by which agents interact with their environment.

Task#

class Task(Generic[EnvType])

[view_source]

An abstract class defining a goal-directed 'task'. Agents interact with their environment through a task by taking a step, after which they receive new observations, rewards, and (potentially) other useful information.

A Task is a helpful generalization of OpenAI gym's Env class that allows multiple tasks (e.g. point and object navigation) to be defined on a single environment (e.g. AI2-THOR).

Attributes

  • env: The environment.
  • sensor_suite: Collection of sensors formed from the sensors argument in the initializer.
  • task_info: Dictionary of (k, v) pairs defining task goals and other task information.
  • max_steps: The maximum number of steps an agent can take in the task before it is considered to have failed.
  • observation_space: The observation space returned on each step from the sensors.
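
To make the interface concrete, here is a minimal sketch of a Task subclass. The environment type MyEnv and its take_action/current_frame/stop members, the reward logic, the get_observations helper, and RLStepResult's field names and import path are all assumptions made for illustration; only the overridden names are fixed by the interface documented below.

```python
from typing import Any, Tuple

import gym
import numpy as np

# The Task import path matches this module's name; RLStepResult's
# location and field names are assumptions (it is the type returned
# by Task.step, documented below).
from core.base_abstractions.task import Task
from core.base_abstractions.misc import RLStepResult


class MyNavTask(Task["MyEnv"]):  # "MyEnv" is a hypothetical environment type
    _ACTIONS = ("MoveAhead", "RotateLeft", "RotateRight", "End")

    def __init__(self, **kwargs: Any) -> None:
        super().__init__(**kwargs)  # env, sensors, task_info, max_steps
        self._took_end_action = False

    @classmethod
    def class_action_names(cls, **kwargs) -> Tuple[str, ...]:
        return cls._ACTIONS

    @property
    def action_space(self) -> gym.Space:
        # A common pattern: one discrete action per action name.
        return gym.spaces.Discrete(len(self.class_action_names()))

    def render(self, mode: str = "rgb", *args, **kwargs) -> np.ndarray:
        return self.env.current_frame  # hypothetical env attribute

    def _step(self, action: int) -> RLStepResult:  # single-agent sketch
        # `step` handles bookkeeping and delegates here; the transition
        # and reward logic belong in `_step`, not in `step`.
        action_name = self.class_action_names()[action]
        if action_name == "End":
            self._took_end_action = True
        else:
            self.env.take_action(action_name)  # hypothetical env method
        return RLStepResult(
            observation=self.get_observations(),  # assumed sensor_suite helper
            reward=0.0,  # task-specific reward logic goes here
            done=self.is_done(),
            info={},
        )

    def reached_terminal_state(self) -> bool:
        return self._took_end_action

    def close(self) -> None:
        self.env.stop()  # hypothetical env method
```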

Task.action_space#

 | @property
 | @abstractmethod
 | action_space() -> gym.Space

[view_source]

Task's action space.

Returns

The action space for the task.

Task.render#

 | @abstractmethod
 | render(mode: str = "rgb", *args, **kwargs) -> np.ndarray

[view_source]

Render the current task state.

The rendered task state can be returned in any of the supported modes.

Parameters

  • mode : The mode in which to render. For example, you might have an 'rgb' mode that renders the agent's egocentric viewpoint or a 'dev' mode returning additional information.
  • args : Extra args.
  • kwargs : Extra kwargs.

Returns

A numpy array corresponding to the requested render.
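
A sketch of a render implementation that dispatches on the requested mode (imports as in the MyNavTask sketch above; both frame attributes are hypothetical stand-ins for whatever the environment exposes):

```python
def render(self, mode: str = "rgb", *args, **kwargs) -> np.ndarray:
    if mode == "rgb":
        return self.env.current_frame  # hypothetical: egocentric RGB frame
    elif mode == "dev":
        return self.env.debug_frame  # hypothetical: annotated debug view
    raise NotImplementedError(f"Unsupported render mode {mode!r}.")
```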

Task.step#

 | step(action: Union[int, Sequence[int]]) -> RLStepResult

[view_source]

Take an action in the environment (one per agent).

Takes the action in the environment corresponding to self.class_action_names()[action] (one such action per agent when action is a Sequence) and returns observations (as well as rewards and any additional information) corresponding to the agent's new state. Note that this function should not be overridden without care; instead, implement the _step function.

Parameters

  • action : The action to take.

Returns

An RLStepResult object encoding the new observations, reward, and (possibly) additional information.
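
For example, with the MyNavTask sketch above (index 1 maps to 'RotateLeft'), a single-agent step might look as follows; the RLStepResult field names are assumptions consistent with their use in the sketch:

```python
step_result = task.step(action=1)  # i.e. task.class_action_names()[1]

obs = step_result.observation  # the new sensor readings
reward = step_result.reward
if step_result.done:
    print(task.metrics())
```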

Task.reached_max_steps#

 | reached_max_steps() -> bool

[view_source]

Has the agent reached the maximum number of steps?

Task.reached_terminal_state#

 | @abstractmethod
 | reached_terminal_state() -> bool

[view_source]

Has the agent reached a terminal state (excluding reaching the maximum number of steps)?

Task.is_done#

 | is_done() -> bool

[view_source]

Did the agent reach a terminal state or perform the maximum number of steps?
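
In other words, a sketch of the composition implied by the description above (not necessarily the literal source):

```python
def is_done(self) -> bool:
    return self.reached_terminal_state() or self.reached_max_steps()
```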

Task.num_steps_taken#

 | num_steps_taken() -> int

[view_source]

Number of steps taken by the agent in the task so far.

Task.class_action_names#

 | @classmethod
 | @abstractmethod
 | class_action_names(cls, **kwargs) -> Tuple[str, ...]

[view_source]

A tuple of action names.

Parameters

  • kwargs : Keyword arguments.

Returns

A tuple of (ordered) action names such that running task.step(i) corresponds to taking the action task.class_action_names()[i].
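
With the MyNavTask sketch above, for instance:

```python
>>> MyNavTask.class_action_names()
('MoveAhead', 'RotateLeft', 'RotateRight', 'End')
>>> # so task.step(1) corresponds to taking the "RotateLeft" action
```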

Task.action_names#

 | action_names() -> Tuple[str, ...]

[view_source]

Action names of the Task instance.

This method should be overridden if class_action_names requires keyword arguments to determine the number of actions. A sketch of such an override follows.
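
Here, for a hypothetical task whose action set depends on an include_end_action flag recorded in task_info at construction time (both the flag and the keyword argument are assumptions):

```python
def action_names(self) -> Tuple[str, ...]:
    # class_action_names requires a keyword argument here, so the
    # instance method forwards the value recorded in task_info.
    return self.class_action_names(
        include_end_action=self.task_info["include_end_action"]
    )
```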

Task.total_actions#

 | @property
 | total_actions() -> int

[view_source]

Total number of actions available to an agent in this Task.

Task.index_to_action#

 | index_to_action(index: int) -> str

[view_source]

Returns the action name corresponding to index.

Task.close#

 | @abstractmethod
 | close() -> None

[view_source]

Closes the environment and any other files opened by the Task (if applicable).

Task.metrics#

 | metrics() -> Dict[str, Any]

[view_source]

Computes metrics related to the task after the task's completion.

By default this function is automatically called during training, and the reported metrics are logged to tensorboard.

Returns

A dictionary where every key is a string (the metric's name) and the value is the value of the metric.
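
A sketch of a metrics override that extends the base metrics with task-specific values (typing imports as in the earlier sketch; the success flag and the availability of default metrics via super() are assumptions):

```python
def metrics(self) -> Dict[str, Any]:
    result = super().metrics()  # assumed to supply default metrics
    result["success"] = float(self._took_end_action)  # hypothetical flag
    result["ep_length"] = self.num_steps_taken()
    return result
```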

Task.query_expert#

 | query_expert(**kwargs) -> Tuple[Any, bool]

[view_source]

Query the expert policy for this task.

Returns

A tuple (x, y) where x is the expert action (or policy) and y is False if the expert could not determine the optimal action (otherwise True). Here y is used for masking. Even when y is False, x should still lie in the space of possible values (e.g. if x is the expert policy then x should be the correct length, sum to 1, and have non-negative entries).
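
A sketch honoring this masking contract, built around a hypothetical shortest-path helper: when the expert cannot decide, return some valid action index together with False.

```python
def query_expert(self, **kwargs) -> Tuple[Any, bool]:
    next_action = self.env.shortest_path_next_action()  # hypothetical helper
    if next_action is None:
        # The expert is stumped: still return a valid action index,
        # with False so this step is masked out of any imitation loss.
        return 0, False
    return self.class_action_names().index(next_action), True
```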

Task.cumulative_reward#

 | @property
 | cumulative_reward() -> float

[view_source]

Mean per-agent total cumulative reward in the task so far.

Returns

Mean per-agent cumulative reward as a float.

TaskSampler#

class TaskSampler(abc.ABC)

[view_source]

Abstract class defining how new tasks are sampled.
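
A minimal sketch of a concrete TaskSampler that serves a fixed number of the MyNavTask instances sketched earlier; the environment, the task constructor's exact signature, and the sampling logic are assumptions, while the overridden names are fixed by the interface documented below.

```python
import random
from typing import Optional, Union

from core.base_abstractions.task import Task, TaskSampler


class MyTaskSampler(TaskSampler):
    def __init__(self, env: "MyEnv", num_tasks: int) -> None:
        self.env = env  # hypothetical environment, shared across tasks
        self.num_tasks = num_tasks
        self._remaining = num_tasks
        self._last_task: Optional[Task] = None
        self._rng = random.Random()

    @property
    def length(self) -> Union[int, float]:
        return self._remaining

    @property
    def total_unique(self) -> Optional[Union[int, float]]:
        return self.num_tasks

    @property
    def last_sampled_task(self) -> Optional[Task]:
        return self._last_task

    @property
    def all_observation_spaces_equal(self) -> bool:
        return True  # every sampled task shares one sensor suite

    def next_task(self, force_advance_scene: bool = False) -> Optional[Task]:
        if self._remaining <= 0:
            return None
        self._remaining -= 1
        # Constructor arguments mirror the Task attributes listed above;
        # the exact signature is an assumption.
        self._last_task = MyNavTask(
            env=self.env, sensors=[], task_info={}, max_steps=100
        )
        return self._last_task

    def close(self) -> None:
        self.env.stop()  # hypothetical env method

    def reset(self) -> None:
        self._remaining = self.num_tasks

    def set_seed(self, seed: int) -> None:
        self._rng.seed(seed)
```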

TaskSampler.length#

 | @property
 | @abstractmethod
 | length() -> Union[int, float]

[view_source]

Length.

Returns

Total number of tasks remaining that can be sampled. Can be float('inf').

TaskSampler.total_unique#

 | @property
 | @abstractmethod
 | total_unique() -> Optional[Union[int, float]]

[view_source]

Total unique tasks.

Returns

Total number of unique tasks that can be sampled. Can be float('inf') or, if the total unique is not known, None.

TaskSampler.last_sampled_task#

 | @property
 | @abstractmethod
 | last_sampled_task() -> Optional[Task]

[view_source]

Get the most recently sampled Task.

Returns

The most recently sampled Task.

TaskSampler.next_task#

 | @abstractmethod
 | next_task(force_advance_scene: bool = False) -> Optional[Task]

[view_source]

Get the next task in the sampler's stream.

Parameters

  • force_advance_scene : Used to (if applicable) force the task sampler to use a new scene for the next task. This is useful if, during training, you would like to train with one scene for some number of steps and then explicitly control when you begin training with the next scene.

Returns

The next Task in the sampler's stream if a next task exists. Otherwise None.
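
Putting the two abstractions together, a typical consumption loop over the hypothetical sampler sketched above:

```python
sampler = MyTaskSampler(env=my_env, num_tasks=10)  # hypothetical setup
sampler.set_seed(12345)

task = sampler.next_task()
while task is not None:
    while not task.is_done():
        # A random policy, purely for illustration.
        task.step(action=int(task.action_space.sample()))
    print(task.metrics())
    task = sampler.next_task()

sampler.close()
```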

TaskSampler.close#

 | @abstractmethod
 | close() -> None

[view_source]

Closes any open environments or streams.

Should be run when done sampling.

TaskSampler.all_observation_spaces_equal#

 | @property
 | @abstractmethod
 | all_observation_spaces_equal() -> bool

[view_source]

Checks if all observation spaces of tasks that can be sampled are equal.

This will almost always simply return True. A case in which it should return False is, for example, a setting where you design a TaskSampler that can generate different types of tasks, e.g. point navigation tasks and object navigation tasks. These different tasks may then produce different types of observations.

Returns

True if all Tasks that can be sampled by this sampler have the same observation space. Otherwise False.

TaskSampler.reset#

 | @abstractmethod
 | reset() -> None

[view_source]

Resets the task sampler to its original state (except for any seed).

TaskSampler.set_seed#

 | @abstractmethod
 | set_seed(seed: int) -> None

[view_source]

Sets a new RNG seed.

Parameters

  • seed : New seed.