allenact.base_abstractions.task#

Defines the primary data structures by which agents interact with their environment.

Task#

class Task(Generic[EnvType])

An abstract class defining a goal-directed 'task.' Agents interact with their environment through a task by taking a step, after which they receive new observations, rewards, and (potentially) other useful information.

A Task is a helpful generalization of OpenAI gym's Env class and allows multiple tasks (e.g., point and object navigation) to be defined on a single environment (e.g., AI2-THOR).

Attributes

  • env: The environment.
  • sensor_suite: Collection of sensors formed from the sensors argument in the initializer.
  • task_info: Dictionary of (k, v) pairs defining task goals and other task information.
  • max_steps: The maximum number of steps an agent can take in the task before it is considered to have failed.
  • observation_space: The observation space returned on each step from the sensors.
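
To make this contract concrete, here is a minimal sketch of a hypothetical subclass. The environment hooks (frame, move, stop, agent_position), the action set, and the reward scheme are invented for illustration; only the Task methods themselves come from this module.

```python
from typing import Any

import gym
import numpy as np

from allenact.base_abstractions.misc import RLStepResult
from allenact.base_abstractions.task import Task


class ToyNavigationTask(Task):
    """Hypothetical goal-directed navigation task (illustration only)."""

    _action_names = ("move_ahead", "rotate_left", "rotate_right", "end")

    @property
    def action_space(self) -> gym.Space:
        return gym.spaces.Discrete(len(self._action_names))

    def render(self, mode: str = "rgb", *args, **kwargs) -> np.ndarray:
        assert mode == "rgb", "only 'rgb' is supported in this sketch"
        return self.env.frame()  # hypothetical egocentric RGB frame

    def _step(self, action: Any) -> RLStepResult:
        name = self._action_names[action]
        self._took_end_action = name == "end"
        if not self._took_end_action:
            self.env.move(name)  # hypothetical environment hook
        success = self.env.agent_position == self.task_info["goal_position"]
        return RLStepResult(
            observation=self.get_observations(),  # readings from sensor_suite
            reward=1.0 if success else -0.01,  # toy shaped reward
            done=self.is_done(),
            info=None,
        )

    def reached_terminal_state(self) -> bool:
        return getattr(self, "_took_end_action", False)

    def close(self) -> None:
        self.env.stop()  # hypothetical environment clean-up
```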

Task.action_space#

 | @property
 | @abc.abstractmethod
 | action_space() -> gym.Space

Task's action space.

Returns

The action space for the task.

Task.render#

 | @abc.abstractmethod
 | render(mode: str = "rgb", *args, **kwargs) -> np.ndarray

Render the current task state.

The rendered task state can be returned in any of the supported modes.

Parameters

  • mode : The mode in which to render. For example, you might have an 'rgb' mode that renders the agent's egocentric viewpoint or a 'dev' mode returning additional information.
  • args : Extra args.
  • kwargs : Extra kwargs.

Returns

A numpy array corresponding to the requested render.
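
Inside a subclass, a hypothetical implementation might dispatch on the requested mode (both environment hooks below are invented for illustration):

```python
def render(self, mode: str = "rgb", *args, **kwargs) -> np.ndarray:
    if mode == "rgb":
        return self.env.frame()  # hypothetical egocentric RGB frame
    elif mode == "dev":
        return self.env.debug_frame()  # hypothetical debug visualization
    raise NotImplementedError(f"unsupported render mode {mode!r}")
```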

Task.step#

 | step(action: Any) -> RLStepResult

Take an action in the environment (one per agent).

Takes the action in the environment and returns observations (as well as rewards and any additional information) corresponding to the agent's new state. Note that this function should not be overridden without care (instead implement the _step function).

Parameters

  • action : The action to take, should be of the same form as specified by self.action_space.

Returns

An RLStepResult object encoding the new observations, reward, and (possibly) additional information.
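
For example, assuming RLStepResult exposes observation, reward, done, and info fields (as defined in allenact.base_abstractions.misc), a rollout with a random policy might look like:

```python
# `task` is any fully-constructed Task instance.
while not task.is_done():
    step_result = task.step(task.action_space.sample())
    print(step_result.reward, step_result.done)
```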

Task.reached_max_steps#

 | reached_max_steps() -> bool

Has the agent reached the maximum number of steps?

Task.reached_terminal_state#

 | @abc.abstractmethod
 | reached_terminal_state() -> bool

Has the agent reached a terminal state (excluding reaching the maximum number of steps)?

Task.is_done#

 | is_done() -> bool

Did the agent reach a terminal state or perform the maximum number of steps?
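
In other words, the behavior matches the docstring's definition (shown here as an illustrative sketch, not the verbatim implementation):

```python
def is_done(self) -> bool:
    return self.reached_terminal_state() or self.reached_max_steps()
```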

Task.num_steps_taken#

 | num_steps_taken() -> int

Number of steps taken by the agent in the task so far.

Task.action_names#

 | @deprecated
 | action_names() -> Tuple[str, ...]

Action names of the Task instance.

This function has been deprecated and will be removed.

This function is a hold-over from when the Task abstraction only considered gym.space.Discrete action spaces (in which case it makes sense to name these actions).

This implementation of action_names requires that a class_action_names method has been defined. This method should be overridden if class_action_names requires keyword arguments to determine the number of actions.

Task.close#

 | @abc.abstractmethod
 | close() -> None

Closes the environment and any other files opened by the Task (if applicable).

Task.metrics#

 | metrics() -> Dict[str, Any]

Computes metrics related to the task after the task's completion.

By default, this function is automatically called during training and the reported metrics are logged to tensorboard.

Returns

A dictionary where every key is a string (the metric's name) and the value is the value of the metric.
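
A subclass will typically merge its own task-specific values into those produced by the base class; a sketch, where _success is a hypothetical helper:

```python
# Inside a Task subclass:
def metrics(self) -> Dict[str, Any]:
    result = super().metrics()  # base bookkeeping (e.g., episode length)
    if self.is_done():
        result["success"] = float(self._success())  # hypothetical helper
    return result
```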

Task.query_expert#

 | query_expert(**kwargs) -> Tuple[Any, bool]

(Deprecated) Query the expert policy for this task.

The correct way to include this functionality is now to define a class derived from allenact.base_abstractions.sensor.AbstractExpertActionSensor or allenact.base_abstractions.sensor.AbstractExpertPolicySensor, in which a query_expert method must be defined.

Returns

A tuple (x, y) where x is the expert action (or policy) and y is False if the expert could not determine the optimal action (otherwise True). Here y is used for masking. Even when y is False, x should still lie in the space of possible values (e.g., if x is the expert policy then x should be the correct length, sum to 1, and have non-negative entries).
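
An implementation honoring this contract might look like the following sketch, where _shortest_path_action is a hypothetical planner:

```python
# Inside a Task subclass:
def query_expert(self, **kwargs) -> Tuple[Any, bool]:
    expert_action = self._shortest_path_action()  # hypothetical planner
    if expert_action is None:
        # The expert failed: flag False but still return a valid action.
        return self.action_space.sample(), False
    return expert_action, True
```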

Task.cumulative_reward#

 | @property
 | cumulative_reward() -> float

Mean per-agent total cumulative reward in the task so far.

Returns

Mean per-agent cumulative reward as a float.

TaskSampler#

class TaskSampler(abc.ABC)

Abstract class defining how new tasks are sampled.

TaskSampler.length#

 | @property
 | @abc.abstractmethod
 | length() -> Union[int, float]

Length.

Returns

The total number of tasks remaining that can be sampled. Can be float('inf').

TaskSampler.last_sampled_task#

 | @property
 | @abc.abstractmethod
 | last_sampled_task() -> Optional[Task]

Get the most recently sampled Task.

Returns

The most recently sampled Task.

TaskSampler.next_task#

 | @abc.abstractmethod
 | next_task(force_advance_scene: bool = False) -> Optional[Task]

Get the next task in the sampler's stream.

Parameters

  • force_advance_scene : Used to (if applicable) force the task sampler to use a new scene for the next task. This is useful if, during training, you would like to train with one scene for some number of steps and then explicitly control when you begin training with the next scene.

Returns

The next Task in the sampler's stream if a next task exists. Otherwise None.
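
A typical consumption pattern, here exhausting a finite sampler with a random policy (illustration only):

```python
from allenact.base_abstractions.task import TaskSampler

def exhaust_sampler(sampler: TaskSampler) -> None:
    """Run every remaining task with random actions; illustration only."""
    try:
        task = sampler.next_task()
        while task is not None:  # None signals the stream is exhausted
            while not task.is_done():
                task.step(task.action_space.sample())
            print(task.metrics())
            task = sampler.next_task()
    finally:
        sampler.close()
```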

TaskSampler.close#

 | @abc.abstractmethod
 | close() -> None

Closes any open environments or streams.

Should be run when done sampling.

TaskSampler.all_observation_spaces_equal#

 | @property
 | @abc.abstractmethod
 | all_observation_spaces_equal() -> bool

Checks if all observation spaces of tasks that can be sampled are equal.

This will almost always simply return True. A case in which it should return False is, for example, a setting where you design a TaskSampler that can generate different types of tasks, e.g., point navigation tasks and object navigation tasks. These different tasks may produce different types of observations.

Returns

True if all Tasks that can be sampled by this sampler have the same observation space. Otherwise False.
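
For a mixed sampler of the kind described above, a hypothetical implementation might be:

```python
# Inside a TaskSampler subclass; `self.task_types` is hypothetical.
@property
def all_observation_spaces_equal(self) -> bool:
    # False when the sampler mixes, e.g., point and object navigation tasks.
    return len(self.task_types) <= 1
```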

TaskSampler.reset#

 | @abc.abstractmethod
 | reset() -> None

Resets the task sampler to its original state (except for any seed).

TaskSampler.set_seed#

 | @abc.abstractmethod
 | set_seed(seed: int) -> None

Sets a new RNG seed.

Parameters

  • seed : New seed.
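
Putting the pieces together, a minimal concrete sampler might look like the sketch below. FixedListTaskSampler and task_builders are hypothetical; real samplers typically build tasks on the fly from scene datasets:

```python
import random
from typing import Callable, List, Optional, Union

from allenact.base_abstractions.task import Task, TaskSampler


class FixedListTaskSampler(TaskSampler):
    """Hypothetical sampler serving tasks from a pre-built list of builders."""

    def __init__(self, task_builders: List[Callable[[], Task]]):
        self.task_builders = task_builders
        self._index = 0
        self._last: Optional[Task] = None
        self._rng = random.Random()

    @property
    def length(self) -> Union[int, float]:
        return len(self.task_builders) - self._index

    @property
    def last_sampled_task(self) -> Optional[Task]:
        return self._last

    @property
    def all_observation_spaces_equal(self) -> bool:
        return True  # all tasks in this sketch share one observation space

    def next_task(self, force_advance_scene: bool = False) -> Optional[Task]:
        if self._index >= len(self.task_builders):
            return None  # stream exhausted
        self._last = self.task_builders[self._index]()
        self._index += 1
        return self._last

    def close(self) -> None:
        if self._last is not None:
            self._last.close()

    def reset(self) -> None:
        self._index = 0

    def set_seed(self, seed: int) -> None:
        self._rng.seed(seed)  # the RNG is unused here; shown for completeness
```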